00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v23.11" build number 1032 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3699 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.044 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.045 The recommended git tool is: git 00:00:00.045 using credential 00000000-0000-0000-0000-000000000002 00:00:00.046 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.063 Fetching changes from the remote Git repository 00:00:00.065 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.089 Using shallow fetch with depth 1 00:00:00.089 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.089 > git --version # timeout=10 00:00:00.119 > git --version # 'git version 2.39.2' 00:00:00.119 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.141 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.141 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.670 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.681 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.691 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.691 > git config core.sparsecheckout # timeout=10 00:00:04.700 > git read-tree -mu HEAD # timeout=10 00:00:04.714 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:04.734 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.734 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:04.821 [Pipeline] Start of Pipeline 00:00:04.839 [Pipeline] library 00:00:04.842 Loading library shm_lib@master 00:00:04.842 Library shm_lib@master is cached. Copying from home. 00:00:04.856 [Pipeline] node 00:00:04.867 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:04.869 [Pipeline] { 00:00:04.878 [Pipeline] catchError 00:00:04.879 [Pipeline] { 00:00:04.890 [Pipeline] wrap 00:00:04.899 [Pipeline] { 00:00:04.907 [Pipeline] stage 00:00:04.909 [Pipeline] { (Prologue) 00:00:04.930 [Pipeline] echo 00:00:04.931 Node: VM-host-SM0 00:00:04.938 [Pipeline] cleanWs 00:00:04.950 [WS-CLEANUP] Deleting project workspace... 00:00:04.950 [WS-CLEANUP] Deferred wipeout is used... 
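The prologue above boils down to a credentialed, depth-1 checkout of the jbp repo. A minimal sketch of the same sequence in plain git, with the Jenkins credential helper and timeouts omitted; the URL, the --depth=1 fetch and the detached checkout of FETCH_HEAD are taken from the trace, while the workspace path handling is simplified:

    # Hedged sketch of the shallow jbp checkout performed in the prologue.
    set -euo pipefail
    repo=https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    workdir=$(mktemp -d)            # stand-in for the Jenkins workspace path
    git init -q "$workdir/jbp" && cd "$workdir/jbp"
    git config remote.origin.url "$repo"
    # mirrors: git fetch --tags --force --progress --depth=1 -- <url> refs/heads/master
    git fetch --tags --force --progress --depth=1 -- "$repo" refs/heads/master
    git checkout -f "$(git rev-parse FETCH_HEAD^{commit})"
    git log --oneline -n 1          # e.g. "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
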
00:00:04.956 [WS-CLEANUP] done 00:00:05.145 [Pipeline] setCustomBuildProperty 00:00:05.238 [Pipeline] httpRequest 00:00:06.537 [Pipeline] echo 00:00:06.539 Sorcerer 10.211.164.20 is alive 00:00:06.548 [Pipeline] retry 00:00:06.550 [Pipeline] { 00:00:06.560 [Pipeline] httpRequest 00:00:06.564 HttpMethod: GET 00:00:06.564 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.564 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.575 Response Code: HTTP/1.1 200 OK 00:00:06.575 Success: Status code 200 is in the accepted range: 200,404 00:00:06.576 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.606 [Pipeline] } 00:00:07.621 [Pipeline] // retry 00:00:07.627 [Pipeline] sh 00:00:07.908 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.923 [Pipeline] httpRequest 00:00:08.265 [Pipeline] echo 00:00:08.267 Sorcerer 10.211.164.20 is alive 00:00:08.277 [Pipeline] retry 00:00:08.279 [Pipeline] { 00:00:08.294 [Pipeline] httpRequest 00:00:08.299 HttpMethod: GET 00:00:08.299 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:08.300 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:08.313 Response Code: HTTP/1.1 200 OK 00:00:08.313 Success: Status code 200 is in the accepted range: 200,404 00:00:08.314 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:14.754 [Pipeline] } 00:01:14.772 [Pipeline] // retry 00:01:14.780 [Pipeline] sh 00:01:15.064 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:17.618 [Pipeline] sh 00:01:17.906 + git -C spdk log --oneline -n5 00:01:17.906 c13c99a5e test: Various fixes for Fedora40 00:01:17.906 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:01:17.906 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:01:17.906 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:01:17.906 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:01:17.934 [Pipeline] withCredentials 00:01:17.947 > git --version # timeout=10 00:01:17.960 > git --version # 'git version 2.39.2' 00:01:17.977 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:17.979 [Pipeline] { 00:01:17.990 [Pipeline] retry 00:01:17.992 [Pipeline] { 00:01:18.012 [Pipeline] sh 00:01:18.298 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:40.240 [Pipeline] } 00:01:40.257 [Pipeline] // retry 00:01:40.262 [Pipeline] } 00:01:40.277 [Pipeline] // withCredentials 00:01:40.287 [Pipeline] httpRequest 00:01:40.620 [Pipeline] echo 00:01:40.622 Sorcerer 10.211.164.20 is alive 00:01:40.629 [Pipeline] retry 00:01:40.630 [Pipeline] { 00:01:40.641 [Pipeline] httpRequest 00:01:40.645 HttpMethod: GET 00:01:40.646 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:40.646 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:40.647 Response Code: HTTP/1.1 200 OK 00:01:40.648 Success: Status code 200 is in the accepted range: 200,404 00:01:40.648 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:50.241 [Pipeline] } 00:01:50.257 [Pipeline] // retry 
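The httpRequest/retry blocks above fetch pinned tarballs of jbp, spdk and (later) dpdk from the internal package mirror and unpack them with tar --no-same-owner. A rough equivalent, assuming curl as a stand-in for the Jenkins httpRequest step; the mirror address and tarball names are copied from the log:

    # Stand-in for the retry { httpRequest } + "tar --no-same-owner -xf" steps.
    set -euo pipefail
    mirror=http://10.211.164.20/packages
    ws=/var/jenkins/workspace/nvmf-tcp-vg-autotest
    for pkg in \
        jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz \
        spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz \
        dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz; do
      curl -fsS --retry 3 -o "$ws/$pkg" "$mirror/$pkg"   # crude retry stand-in
      tar --no-same-owner -xf "$ws/$pkg" -C "$ws"
    done
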
00:01:50.264 [Pipeline] sh 00:01:50.544 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:51.933 [Pipeline] sh 00:01:52.215 + git -C dpdk log --oneline -n5 00:01:52.215 eeb0605f11 version: 23.11.0 00:01:52.215 238778122a doc: update release notes for 23.11 00:01:52.215 46aa6b3cfc doc: fix description of RSS features 00:01:52.215 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:52.215 7e421ae345 devtools: support skipping forbid rule check 00:01:52.234 [Pipeline] writeFile 00:01:52.250 [Pipeline] sh 00:01:52.533 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:52.546 [Pipeline] sh 00:01:52.833 + cat autorun-spdk.conf 00:01:52.833 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:52.833 SPDK_TEST_NVMF=1 00:01:52.833 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:52.833 SPDK_TEST_USDT=1 00:01:52.833 SPDK_RUN_UBSAN=1 00:01:52.833 SPDK_TEST_NVMF_MDNS=1 00:01:52.833 NET_TYPE=virt 00:01:52.833 SPDK_JSONRPC_GO_CLIENT=1 00:01:52.833 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:52.833 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:52.833 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:52.841 RUN_NIGHTLY=1 00:01:52.843 [Pipeline] } 00:01:52.860 [Pipeline] // stage 00:01:52.881 [Pipeline] stage 00:01:52.883 [Pipeline] { (Run VM) 00:01:52.900 [Pipeline] sh 00:01:53.188 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:53.188 + echo 'Start stage prepare_nvme.sh' 00:01:53.188 Start stage prepare_nvme.sh 00:01:53.188 + [[ -n 6 ]] 00:01:53.188 + disk_prefix=ex6 00:01:53.188 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:01:53.188 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:01:53.188 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:01:53.188 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:53.188 ++ SPDK_TEST_NVMF=1 00:01:53.188 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:53.188 ++ SPDK_TEST_USDT=1 00:01:53.188 ++ SPDK_RUN_UBSAN=1 00:01:53.188 ++ SPDK_TEST_NVMF_MDNS=1 00:01:53.188 ++ NET_TYPE=virt 00:01:53.188 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:53.188 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:53.188 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:53.188 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:53.188 ++ RUN_NIGHTLY=1 00:01:53.188 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:53.188 + nvme_files=() 00:01:53.188 + declare -A nvme_files 00:01:53.188 + backend_dir=/var/lib/libvirt/images/backends 00:01:53.188 + nvme_files['nvme.img']=5G 00:01:53.188 + nvme_files['nvme-cmb.img']=5G 00:01:53.188 + nvme_files['nvme-multi0.img']=4G 00:01:53.188 + nvme_files['nvme-multi1.img']=4G 00:01:53.188 + nvme_files['nvme-multi2.img']=4G 00:01:53.188 + nvme_files['nvme-openstack.img']=8G 00:01:53.188 + nvme_files['nvme-zns.img']=5G 00:01:53.188 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:53.188 + (( SPDK_TEST_FTL == 1 )) 00:01:53.188 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:53.188 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:53.188 + for nvme in "${!nvme_files[@]}" 00:01:53.188 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:01:53.188 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:53.188 + for nvme in "${!nvme_files[@]}" 00:01:53.188 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:01:53.188 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:53.188 + for nvme in "${!nvme_files[@]}" 00:01:53.188 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:01:53.188 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:53.188 + for nvme in "${!nvme_files[@]}" 00:01:53.188 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:01:53.188 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:53.188 + for nvme in "${!nvme_files[@]}" 00:01:53.188 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:01:53.188 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:53.188 + for nvme in "${!nvme_files[@]}" 00:01:53.188 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:01:53.448 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:53.448 + for nvme in "${!nvme_files[@]}" 00:01:53.448 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:01:53.448 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:53.448 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:01:53.448 + echo 'End stage prepare_nvme.sh' 00:01:53.448 End stage prepare_nvme.sh 00:01:53.459 [Pipeline] sh 00:01:53.739 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:53.739 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -H -a -v -f fedora39 00:01:53.739 00:01:53.739 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:01:53.739 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:01:53.739 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:53.739 HELP=0 00:01:53.739 DRY_RUN=0 00:01:53.739 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img, 00:01:53.739 NVME_DISKS_TYPE=nvme,nvme, 00:01:53.739 NVME_AUTO_CREATE=0 00:01:53.739 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img, 00:01:53.739 NVME_CMB=,, 00:01:53.739 NVME_PMR=,, 00:01:53.739 NVME_ZNS=,, 00:01:53.739 NVME_MS=,, 00:01:53.739 NVME_FDP=,, 00:01:53.739 
SPDK_VAGRANT_DISTRO=fedora39 00:01:53.739 SPDK_VAGRANT_VMCPU=10 00:01:53.739 SPDK_VAGRANT_VMRAM=12288 00:01:53.739 SPDK_VAGRANT_PROVIDER=libvirt 00:01:53.739 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:53.739 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:53.739 SPDK_OPENSTACK_NETWORK=0 00:01:53.739 VAGRANT_PACKAGE_BOX=0 00:01:53.739 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:53.739 FORCE_DISTRO=true 00:01:53.739 VAGRANT_BOX_VERSION= 00:01:53.739 EXTRA_VAGRANTFILES= 00:01:53.739 NIC_MODEL=e1000 00:01:53.739 00:01:53.739 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt' 00:01:53.739 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:56.271 Bringing machine 'default' up with 'libvirt' provider... 00:01:57.208 ==> default: Creating image (snapshot of base box volume). 00:01:57.468 ==> default: Creating domain with the following settings... 00:01:57.468 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733407682_aa41ce1b34b6aafce9db 00:01:57.468 ==> default: -- Domain type: kvm 00:01:57.468 ==> default: -- Cpus: 10 00:01:57.468 ==> default: -- Feature: acpi 00:01:57.468 ==> default: -- Feature: apic 00:01:57.468 ==> default: -- Feature: pae 00:01:57.468 ==> default: -- Memory: 12288M 00:01:57.468 ==> default: -- Memory Backing: hugepages: 00:01:57.468 ==> default: -- Management MAC: 00:01:57.468 ==> default: -- Loader: 00:01:57.468 ==> default: -- Nvram: 00:01:57.468 ==> default: -- Base box: spdk/fedora39 00:01:57.468 ==> default: -- Storage pool: default 00:01:57.468 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733407682_aa41ce1b34b6aafce9db.img (20G) 00:01:57.468 ==> default: -- Volume Cache: default 00:01:57.468 ==> default: -- Kernel: 00:01:57.468 ==> default: -- Initrd: 00:01:57.468 ==> default: -- Graphics Type: vnc 00:01:57.468 ==> default: -- Graphics Port: -1 00:01:57.468 ==> default: -- Graphics IP: 127.0.0.1 00:01:57.468 ==> default: -- Graphics Password: Not defined 00:01:57.468 ==> default: -- Video Type: cirrus 00:01:57.468 ==> default: -- Video VRAM: 9216 00:01:57.468 ==> default: -- Sound Type: 00:01:57.468 ==> default: -- Keymap: en-us 00:01:57.468 ==> default: -- TPM Path: 00:01:57.468 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:57.468 ==> default: -- Command line args: 00:01:57.468 ==> default: -> value=-device, 00:01:57.468 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:57.468 ==> default: -> value=-drive, 00:01:57.468 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0, 00:01:57.468 ==> default: -> value=-device, 00:01:57.468 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:57.468 ==> default: -> value=-device, 00:01:57.468 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:57.468 ==> default: -> value=-drive, 00:01:57.468 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:57.468 ==> default: -> value=-device, 00:01:57.468 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:57.468 ==> default: -> value=-drive, 00:01:57.468 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:57.468 ==> default: -> value=-device, 00:01:57.468 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:57.468 ==> default: -> value=-drive, 00:01:57.468 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:57.468 ==> default: -> value=-device, 00:01:57.468 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:57.727 ==> default: Creating shared folders metadata... 00:01:57.727 ==> default: Starting domain. 00:02:00.259 ==> default: Waiting for domain to get an IP address... 00:02:15.141 ==> default: Waiting for SSH to become available... 00:02:16.514 ==> default: Configuring and enabling network interfaces... 00:02:20.734 default: SSH address: 192.168.121.5:22 00:02:20.734 default: SSH username: vagrant 00:02:20.734 default: SSH auth method: private key 00:02:23.271 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:31.447 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:36.725 ==> default: Mounting SSHFS shared folder... 00:02:38.103 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:38.103 ==> default: Checking Mount.. 00:02:39.483 ==> default: Folder Successfully Mounted! 00:02:39.483 ==> default: Running provisioner: file... 00:02:40.417 default: ~/.gitconfig => .gitconfig 00:02:40.675 00:02:40.675 SUCCESS! 00:02:40.675 00:02:40.675 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:40.675 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:40.675 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:40.675 00:02:40.683 [Pipeline] } 00:02:40.698 [Pipeline] // stage 00:02:40.707 [Pipeline] dir 00:02:40.708 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt 00:02:40.709 [Pipeline] { 00:02:40.721 [Pipeline] catchError 00:02:40.723 [Pipeline] { 00:02:40.736 [Pipeline] sh 00:02:41.016 + vagrant ssh-config --host vagrant 00:02:41.016 + sed -ne /^Host/,$p 00:02:41.016 + tee ssh_conf 00:02:43.545 Host vagrant 00:02:43.545 HostName 192.168.121.5 00:02:43.545 User vagrant 00:02:43.545 Port 22 00:02:43.545 UserKnownHostsFile /dev/null 00:02:43.545 StrictHostKeyChecking no 00:02:43.545 PasswordAuthentication no 00:02:43.545 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:43.545 IdentitiesOnly yes 00:02:43.545 LogLevel FATAL 00:02:43.545 ForwardAgent yes 00:02:43.545 ForwardX11 yes 00:02:43.545 00:02:43.558 [Pipeline] withEnv 00:02:43.560 [Pipeline] { 00:02:43.573 [Pipeline] sh 00:02:43.852 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:43.852 source /etc/os-release 00:02:43.852 [[ -e /image.version ]] && img=$(< /image.version) 00:02:43.852 # Minimal, systemd-like check. 
00:02:43.852 if [[ -e /.dockerenv ]]; then 00:02:43.852 # Clear garbage from the node's name: 00:02:43.852 # agt-er_autotest_547-896 -> autotest_547-896 00:02:43.852 # $HOSTNAME is the actual container id 00:02:43.852 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:43.852 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:43.852 # We can assume this is a mount from a host where container is running, 00:02:43.852 # so fetch its hostname to easily identify the target swarm worker. 00:02:43.852 container="$(< /etc/hostname) ($agent)" 00:02:43.852 else 00:02:43.852 # Fallback 00:02:43.852 container=$agent 00:02:43.852 fi 00:02:43.852 fi 00:02:43.852 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:43.852 00:02:44.123 [Pipeline] } 00:02:44.143 [Pipeline] // withEnv 00:02:44.154 [Pipeline] setCustomBuildProperty 00:02:44.172 [Pipeline] stage 00:02:44.175 [Pipeline] { (Tests) 00:02:44.195 [Pipeline] sh 00:02:44.475 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:44.751 [Pipeline] sh 00:02:45.035 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:45.311 [Pipeline] timeout 00:02:45.311 Timeout set to expire in 1 hr 0 min 00:02:45.313 [Pipeline] { 00:02:45.330 [Pipeline] sh 00:02:45.610 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:46.179 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:02:46.190 [Pipeline] sh 00:02:46.470 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:46.742 [Pipeline] sh 00:02:47.023 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:47.297 [Pipeline] sh 00:02:47.578 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:02:47.837 ++ readlink -f spdk_repo 00:02:47.837 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:47.837 + [[ -n /home/vagrant/spdk_repo ]] 00:02:47.837 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:47.837 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:47.837 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:47.837 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:47.837 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:47.837 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:02:47.837 + cd /home/vagrant/spdk_repo 00:02:47.837 + source /etc/os-release 00:02:47.837 ++ NAME='Fedora Linux' 00:02:47.837 ++ VERSION='39 (Cloud Edition)' 00:02:47.837 ++ ID=fedora 00:02:47.837 ++ VERSION_ID=39 00:02:47.837 ++ VERSION_CODENAME= 00:02:47.837 ++ PLATFORM_ID=platform:f39 00:02:47.837 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:47.837 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:47.837 ++ LOGO=fedora-logo-icon 00:02:47.837 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:47.837 ++ HOME_URL=https://fedoraproject.org/ 00:02:47.837 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:47.837 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:47.837 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:47.837 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:47.837 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:47.837 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:47.837 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:47.837 ++ SUPPORT_END=2024-11-12 00:02:47.837 ++ VARIANT='Cloud Edition' 00:02:47.837 ++ VARIANT_ID=cloud 00:02:47.837 + uname -a 00:02:47.837 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:47.837 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:47.837 Hugepages 00:02:47.837 node hugesize free / total 00:02:47.837 node0 1048576kB 0 / 0 00:02:47.837 node0 2048kB 0 / 0 00:02:47.837 00:02:47.837 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:47.837 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:47.837 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:48.097 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:48.097 + rm -f /tmp/spdk-ld-path 00:02:48.097 + source autorun-spdk.conf 00:02:48.097 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:48.097 ++ SPDK_TEST_NVMF=1 00:02:48.097 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:48.097 ++ SPDK_TEST_USDT=1 00:02:48.097 ++ SPDK_RUN_UBSAN=1 00:02:48.097 ++ SPDK_TEST_NVMF_MDNS=1 00:02:48.097 ++ NET_TYPE=virt 00:02:48.097 ++ SPDK_JSONRPC_GO_CLIENT=1 00:02:48.097 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:48.097 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:48.097 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:48.097 ++ RUN_NIGHTLY=1 00:02:48.097 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:48.097 + [[ -n '' ]] 00:02:48.097 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:48.097 + for M in /var/spdk/build-*-manifest.txt 00:02:48.097 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:48.097 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:48.097 + for M in /var/spdk/build-*-manifest.txt 00:02:48.097 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:48.097 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:48.097 + for M in /var/spdk/build-*-manifest.txt 00:02:48.097 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:48.097 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:48.097 ++ uname 00:02:48.097 + [[ Linux == \L\i\n\u\x ]] 00:02:48.097 + sudo dmesg -T 00:02:48.097 + sudo dmesg --clear 00:02:48.097 + dmesg_pid=5963 00:02:48.097 + [[ Fedora Linux == FreeBSD ]] 00:02:48.097 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:48.097 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:48.097 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:48.097 + [[ -x /usr/src/fio-static/fio ]] 00:02:48.097 + sudo dmesg -Tw 00:02:48.097 + export FIO_BIN=/usr/src/fio-static/fio 00:02:48.097 + FIO_BIN=/usr/src/fio-static/fio 00:02:48.097 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:48.097 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:48.097 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:48.097 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:48.097 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:48.097 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:48.097 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:48.097 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:48.097 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:48.097 Test configuration: 00:02:48.097 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:48.097 SPDK_TEST_NVMF=1 00:02:48.097 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:48.097 SPDK_TEST_USDT=1 00:02:48.097 SPDK_RUN_UBSAN=1 00:02:48.097 SPDK_TEST_NVMF_MDNS=1 00:02:48.097 NET_TYPE=virt 00:02:48.097 SPDK_JSONRPC_GO_CLIENT=1 00:02:48.097 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:48.097 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:48.097 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:48.097 RUN_NIGHTLY=1 14:08:53 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:02:48.097 14:08:53 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:48.097 14:08:53 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:48.097 14:08:53 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:48.097 14:08:53 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:48.097 14:08:53 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:48.097 14:08:53 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:48.097 14:08:53 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:48.097 14:08:53 -- paths/export.sh@5 -- $ export PATH 00:02:48.097 14:08:53 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:48.097 14:08:53 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:48.097 14:08:53 -- common/autobuild_common.sh@440 -- $ date +%s 00:02:48.097 14:08:53 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1733407733.XXXXXX 00:02:48.097 14:08:53 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1733407733.vDxy3l 00:02:48.097 14:08:53 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:02:48.097 14:08:53 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']' 00:02:48.097 14:08:53 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:48.357 14:08:53 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:48.357 14:08:53 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:48.357 14:08:53 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:48.357 14:08:53 -- common/autobuild_common.sh@456 -- $ get_config_params 00:02:48.357 14:08:53 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:02:48.357 14:08:53 -- common/autotest_common.sh@10 -- $ set +x 00:02:48.357 14:08:53 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:02:48.357 14:08:53 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:48.357 14:08:53 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:48.357 14:08:53 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:48.357 14:08:53 -- spdk/autobuild.sh@16 -- $ date -u 00:02:48.357 Thu Dec 5 02:08:53 PM UTC 2024 00:02:48.357 14:08:53 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:48.357 LTS-67-gc13c99a5e 00:02:48.357 14:08:53 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:48.357 14:08:53 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:48.357 14:08:53 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:48.357 14:08:53 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:48.357 14:08:53 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:48.357 14:08:53 -- common/autotest_common.sh@10 -- $ set +x 00:02:48.357 ************************************ 00:02:48.357 START TEST ubsan 00:02:48.357 ************************************ 00:02:48.357 using ubsan 00:02:48.357 14:08:53 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:02:48.357 00:02:48.357 real 0m0.000s 00:02:48.357 user 0m0.000s 00:02:48.357 sys 0m0.000s 00:02:48.357 14:08:53 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:48.357 14:08:53 -- common/autotest_common.sh@10 -- $ set +x 00:02:48.357 ************************************ 00:02:48.357 END TEST ubsan 00:02:48.357 ************************************ 00:02:48.357 
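The START TEST / END TEST banners and the real/user/sys timings above are emitted by SPDK's run_test helper in autotest_common.sh. A simplified stand-in that only reproduces the visible behaviour, not the real helper's xtrace management and exit-code bookkeeping:

    # Simplified run_test: banner, time the command, banner again.
    run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
    }

    run_test ubsan echo 'using ubsan'    # the exact invocation traced above
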
14:08:53 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:48.357 14:08:53 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:48.357 14:08:53 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:48.357 14:08:53 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:02:48.357 14:08:53 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:48.357 14:08:53 -- common/autotest_common.sh@10 -- $ set +x 00:02:48.357 ************************************ 00:02:48.357 START TEST build_native_dpdk 00:02:48.357 ************************************ 00:02:48.357 14:08:53 -- common/autotest_common.sh@1114 -- $ _build_native_dpdk 00:02:48.357 14:08:53 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:48.357 14:08:53 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:48.357 14:08:53 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:48.357 14:08:53 -- common/autobuild_common.sh@51 -- $ local compiler 00:02:48.357 14:08:53 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:48.357 14:08:53 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:48.357 14:08:53 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:48.357 14:08:53 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:48.357 14:08:53 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:48.357 14:08:53 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:48.357 14:08:53 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:48.357 14:08:53 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:48.357 14:08:53 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:48.357 14:08:53 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:48.357 14:08:53 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:48.357 14:08:53 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:48.357 14:08:53 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:48.357 14:08:53 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:48.357 14:08:53 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:48.357 14:08:53 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:48.357 eeb0605f11 version: 23.11.0 00:02:48.357 238778122a doc: update release notes for 23.11 00:02:48.357 46aa6b3cfc doc: fix description of RSS features 00:02:48.357 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:48.357 7e421ae345 devtools: support skipping forbid rule check 00:02:48.357 14:08:53 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:48.357 14:08:53 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:48.357 14:08:53 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:48.357 14:08:53 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:48.357 14:08:53 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:48.357 14:08:53 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:48.357 14:08:53 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:48.357 14:08:53 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:48.357 14:08:53 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:48.357 14:08:53 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:48.357 14:08:53 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:48.357 14:08:53 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:48.357 14:08:53 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:48.357 14:08:53 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:48.357 14:08:53 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:48.357 14:08:53 -- common/autobuild_common.sh@168 -- $ uname -s 00:02:48.357 14:08:53 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:48.357 14:08:53 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:48.357 14:08:53 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:48.357 14:08:53 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:48.357 14:08:53 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:48.357 14:08:53 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:48.357 14:08:53 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:48.357 14:08:53 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:48.357 14:08:53 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:48.357 14:08:53 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:48.357 14:08:53 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:48.357 14:08:53 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:48.357 14:08:53 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:48.357 14:08:53 -- scripts/common.sh@343 -- $ case "$op" in 00:02:48.357 14:08:53 -- scripts/common.sh@344 -- $ : 1 00:02:48.357 14:08:53 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:48.357 14:08:53 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:48.357 14:08:53 -- scripts/common.sh@364 -- $ decimal 23 00:02:48.357 14:08:53 -- scripts/common.sh@352 -- $ local d=23 00:02:48.357 14:08:53 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:48.357 14:08:53 -- scripts/common.sh@354 -- $ echo 23 00:02:48.357 14:08:53 -- scripts/common.sh@364 -- $ ver1[v]=23 00:02:48.357 14:08:53 -- scripts/common.sh@365 -- $ decimal 21 00:02:48.357 14:08:53 -- scripts/common.sh@352 -- $ local d=21 00:02:48.357 14:08:53 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:48.357 14:08:53 -- scripts/common.sh@354 -- $ echo 21 00:02:48.357 14:08:53 -- scripts/common.sh@365 -- $ ver2[v]=21 00:02:48.357 14:08:53 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:48.357 14:08:53 -- scripts/common.sh@366 -- $ return 1 00:02:48.357 14:08:53 -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:48.357 patching file config/rte_config.h 00:02:48.357 Hunk #1 succeeded at 60 (offset 1 line). 00:02:48.357 14:08:53 -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:02:48.357 14:08:53 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:48.357 14:08:53 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:48.357 14:08:53 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:48.357 14:08:53 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:48.357 14:08:53 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:48.357 14:08:53 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:48.357 14:08:53 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:48.357 14:08:53 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:48.357 14:08:53 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:48.357 14:08:53 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:48.357 14:08:53 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:48.357 14:08:53 -- scripts/common.sh@343 -- $ case "$op" in 00:02:48.357 14:08:53 -- scripts/common.sh@344 -- $ : 1 00:02:48.357 14:08:53 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:48.357 14:08:53 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:48.357 14:08:53 -- scripts/common.sh@364 -- $ decimal 23 00:02:48.357 14:08:53 -- scripts/common.sh@352 -- $ local d=23 00:02:48.358 14:08:53 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:48.358 14:08:53 -- scripts/common.sh@354 -- $ echo 23 00:02:48.358 14:08:53 -- scripts/common.sh@364 -- $ ver1[v]=23 00:02:48.358 14:08:53 -- scripts/common.sh@365 -- $ decimal 24 00:02:48.358 14:08:53 -- scripts/common.sh@352 -- $ local d=24 00:02:48.358 14:08:53 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:48.358 14:08:53 -- scripts/common.sh@354 -- $ echo 24 00:02:48.358 14:08:53 -- scripts/common.sh@365 -- $ ver2[v]=24 00:02:48.358 14:08:53 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:48.358 14:08:53 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:02:48.358 14:08:53 -- scripts/common.sh@367 -- $ return 0 00:02:48.358 14:08:53 -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:48.358 patching file lib/pcapng/rte_pcapng.c 00:02:48.358 14:08:53 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:02:48.358 14:08:53 -- common/autobuild_common.sh@181 -- $ uname -s 00:02:48.358 14:08:53 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:02:48.358 14:08:53 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:48.358 14:08:53 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:53.627 The Meson build system 00:02:53.627 Version: 1.5.0 00:02:53.627 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:53.627 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:53.627 Build type: native build 00:02:53.627 Program cat found: YES (/usr/bin/cat) 00:02:53.627 Project name: DPDK 00:02:53.627 Project version: 23.11.0 00:02:53.627 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:53.627 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:53.627 Host machine cpu family: x86_64 00:02:53.627 Host machine cpu: x86_64 00:02:53.627 Message: ## Building in Developer Mode ## 00:02:53.627 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:53.627 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:53.627 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:53.627 Program python3 found: YES (/usr/bin/python3) 00:02:53.627 Program cat found: YES (/usr/bin/cat) 00:02:53.627 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
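The lt/cmp_versions trace above (scripts/common.sh) is what decides that the pinned DPDK 23.11.0 is new enough to need the rte_config.h patch (not below 21.11.0) and old enough to need the rte_pcapng.c patch (below 24.07.0). A condensed standalone equivalent of that field-by-field compare; the real helper also handles gt/le/ge and versions of unequal length:

    # Split on ".", "-" and ":" and compare numerically, most significant field first.
    version_lt() {                       # version_lt A B  ->  true iff A < B
      local IFS=.-:
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      for ((v = 0; v < ${#ver1[@]}; v++)); do
        (( ver1[v] > ver2[v] )) && return 1
        (( ver1[v] < ver2[v] )) && return 0
      done
      return 1                           # equal is not less-than
    }

    version_lt 23.11.0 21.11.0 || echo "not < 21.11.0 -> patch config/rte_config.h"
    version_lt 23.11.0 24.07.0 && echo "< 24.07.0 -> patch lib/pcapng/rte_pcapng.c"
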
00:02:53.627 Compiler for C supports arguments -march=native: YES 00:02:53.627 Checking for size of "void *" : 8 00:02:53.627 Checking for size of "void *" : 8 (cached) 00:02:53.627 Library m found: YES 00:02:53.627 Library numa found: YES 00:02:53.627 Has header "numaif.h" : YES 00:02:53.627 Library fdt found: NO 00:02:53.627 Library execinfo found: NO 00:02:53.627 Has header "execinfo.h" : YES 00:02:53.627 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:53.627 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:53.627 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:53.627 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:53.627 Run-time dependency openssl found: YES 3.1.1 00:02:53.627 Run-time dependency libpcap found: YES 1.10.4 00:02:53.627 Has header "pcap.h" with dependency libpcap: YES 00:02:53.627 Compiler for C supports arguments -Wcast-qual: YES 00:02:53.627 Compiler for C supports arguments -Wdeprecated: YES 00:02:53.627 Compiler for C supports arguments -Wformat: YES 00:02:53.627 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:53.627 Compiler for C supports arguments -Wformat-security: NO 00:02:53.627 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:53.627 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:53.627 Compiler for C supports arguments -Wnested-externs: YES 00:02:53.627 Compiler for C supports arguments -Wold-style-definition: YES 00:02:53.627 Compiler for C supports arguments -Wpointer-arith: YES 00:02:53.627 Compiler for C supports arguments -Wsign-compare: YES 00:02:53.627 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:53.627 Compiler for C supports arguments -Wundef: YES 00:02:53.627 Compiler for C supports arguments -Wwrite-strings: YES 00:02:53.627 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:53.627 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:53.627 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:53.627 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:53.627 Program objdump found: YES (/usr/bin/objdump) 00:02:53.627 Compiler for C supports arguments -mavx512f: YES 00:02:53.627 Checking if "AVX512 checking" compiles: YES 00:02:53.627 Fetching value of define "__SSE4_2__" : 1 00:02:53.627 Fetching value of define "__AES__" : 1 00:02:53.627 Fetching value of define "__AVX__" : 1 00:02:53.627 Fetching value of define "__AVX2__" : 1 00:02:53.627 Fetching value of define "__AVX512BW__" : (undefined) 00:02:53.627 Fetching value of define "__AVX512CD__" : (undefined) 00:02:53.627 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:53.627 Fetching value of define "__AVX512F__" : (undefined) 00:02:53.627 Fetching value of define "__AVX512VL__" : (undefined) 00:02:53.627 Fetching value of define "__PCLMUL__" : 1 00:02:53.627 Fetching value of define "__RDRND__" : 1 00:02:53.627 Fetching value of define "__RDSEED__" : 1 00:02:53.627 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:53.627 Fetching value of define "__znver1__" : (undefined) 00:02:53.627 Fetching value of define "__znver2__" : (undefined) 00:02:53.627 Fetching value of define "__znver3__" : (undefined) 00:02:53.627 Fetching value of define "__znver4__" : (undefined) 00:02:53.627 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:53.627 Message: lib/log: Defining dependency "log" 00:02:53.627 Message: lib/kvargs: Defining dependency "kvargs" 00:02:53.627 
Message: lib/telemetry: Defining dependency "telemetry" 00:02:53.627 Checking for function "getentropy" : NO 00:02:53.627 Message: lib/eal: Defining dependency "eal" 00:02:53.627 Message: lib/ring: Defining dependency "ring" 00:02:53.627 Message: lib/rcu: Defining dependency "rcu" 00:02:53.627 Message: lib/mempool: Defining dependency "mempool" 00:02:53.627 Message: lib/mbuf: Defining dependency "mbuf" 00:02:53.627 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:53.627 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:53.627 Compiler for C supports arguments -mpclmul: YES 00:02:53.627 Compiler for C supports arguments -maes: YES 00:02:53.627 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:53.627 Compiler for C supports arguments -mavx512bw: YES 00:02:53.627 Compiler for C supports arguments -mavx512dq: YES 00:02:53.627 Compiler for C supports arguments -mavx512vl: YES 00:02:53.627 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:53.627 Compiler for C supports arguments -mavx2: YES 00:02:53.627 Compiler for C supports arguments -mavx: YES 00:02:53.627 Message: lib/net: Defining dependency "net" 00:02:53.627 Message: lib/meter: Defining dependency "meter" 00:02:53.627 Message: lib/ethdev: Defining dependency "ethdev" 00:02:53.627 Message: lib/pci: Defining dependency "pci" 00:02:53.627 Message: lib/cmdline: Defining dependency "cmdline" 00:02:53.627 Message: lib/metrics: Defining dependency "metrics" 00:02:53.627 Message: lib/hash: Defining dependency "hash" 00:02:53.627 Message: lib/timer: Defining dependency "timer" 00:02:53.627 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:53.627 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:53.627 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:53.627 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:53.627 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:53.627 Message: lib/acl: Defining dependency "acl" 00:02:53.627 Message: lib/bbdev: Defining dependency "bbdev" 00:02:53.627 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:53.627 Run-time dependency libelf found: YES 0.191 00:02:53.627 Message: lib/bpf: Defining dependency "bpf" 00:02:53.627 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:53.627 Message: lib/compressdev: Defining dependency "compressdev" 00:02:53.627 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:53.627 Message: lib/distributor: Defining dependency "distributor" 00:02:53.627 Message: lib/dmadev: Defining dependency "dmadev" 00:02:53.627 Message: lib/efd: Defining dependency "efd" 00:02:53.627 Message: lib/eventdev: Defining dependency "eventdev" 00:02:53.627 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:53.627 Message: lib/gpudev: Defining dependency "gpudev" 00:02:53.627 Message: lib/gro: Defining dependency "gro" 00:02:53.627 Message: lib/gso: Defining dependency "gso" 00:02:53.627 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:53.627 Message: lib/jobstats: Defining dependency "jobstats" 00:02:53.627 Message: lib/latencystats: Defining dependency "latencystats" 00:02:53.627 Message: lib/lpm: Defining dependency "lpm" 00:02:53.627 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:53.627 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:53.627 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:53.627 Compiler for C supports arguments -mavx512f 
-mavx512dq -mavx512ifma: YES 00:02:53.627 Message: lib/member: Defining dependency "member" 00:02:53.627 Message: lib/pcapng: Defining dependency "pcapng" 00:02:53.628 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:53.628 Message: lib/power: Defining dependency "power" 00:02:53.628 Message: lib/rawdev: Defining dependency "rawdev" 00:02:53.628 Message: lib/regexdev: Defining dependency "regexdev" 00:02:53.628 Message: lib/mldev: Defining dependency "mldev" 00:02:53.628 Message: lib/rib: Defining dependency "rib" 00:02:53.628 Message: lib/reorder: Defining dependency "reorder" 00:02:53.628 Message: lib/sched: Defining dependency "sched" 00:02:53.628 Message: lib/security: Defining dependency "security" 00:02:53.628 Message: lib/stack: Defining dependency "stack" 00:02:53.628 Has header "linux/userfaultfd.h" : YES 00:02:53.628 Has header "linux/vduse.h" : YES 00:02:53.628 Message: lib/vhost: Defining dependency "vhost" 00:02:53.628 Message: lib/ipsec: Defining dependency "ipsec" 00:02:53.628 Message: lib/pdcp: Defining dependency "pdcp" 00:02:53.628 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:53.628 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:53.628 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:53.628 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:53.628 Message: lib/fib: Defining dependency "fib" 00:02:53.628 Message: lib/port: Defining dependency "port" 00:02:53.628 Message: lib/pdump: Defining dependency "pdump" 00:02:53.628 Message: lib/table: Defining dependency "table" 00:02:53.628 Message: lib/pipeline: Defining dependency "pipeline" 00:02:53.628 Message: lib/graph: Defining dependency "graph" 00:02:53.628 Message: lib/node: Defining dependency "node" 00:02:53.628 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:55.557 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:55.557 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:55.557 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:55.557 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:55.557 Compiler for C supports arguments -Wno-unused-value: YES 00:02:55.557 Compiler for C supports arguments -Wno-format: YES 00:02:55.557 Compiler for C supports arguments -Wno-format-security: YES 00:02:55.557 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:55.557 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:55.557 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:55.557 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:55.557 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:55.558 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:55.558 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:55.558 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:55.558 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:55.558 Has header "sys/epoll.h" : YES 00:02:55.558 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:55.558 Configuring doxy-api-html.conf using configuration 00:02:55.558 Configuring doxy-api-man.conf using configuration 00:02:55.558 Program mandb found: YES (/usr/bin/mandb) 00:02:55.558 Program sphinx-build found: NO 00:02:55.558 Configuring rte_build_config.h using configuration 00:02:55.558 Message: 00:02:55.558 ================= 00:02:55.558 Applications Enabled 00:02:55.558 ================= 
00:02:55.558 00:02:55.558 apps: 00:02:55.558 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:55.558 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:55.558 test-pmd, test-regex, test-sad, test-security-perf, 00:02:55.558 00:02:55.558 Message: 00:02:55.558 ================= 00:02:55.558 Libraries Enabled 00:02:55.558 ================= 00:02:55.558 00:02:55.558 libs: 00:02:55.558 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:55.558 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:55.558 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:55.558 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:55.558 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:55.558 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:55.558 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:55.558 00:02:55.558 00:02:55.558 Message: 00:02:55.558 =============== 00:02:55.558 Drivers Enabled 00:02:55.558 =============== 00:02:55.558 00:02:55.558 common: 00:02:55.558 00:02:55.558 bus: 00:02:55.558 pci, vdev, 00:02:55.558 mempool: 00:02:55.558 ring, 00:02:55.558 dma: 00:02:55.558 00:02:55.558 net: 00:02:55.558 i40e, 00:02:55.558 raw: 00:02:55.558 00:02:55.558 crypto: 00:02:55.558 00:02:55.558 compress: 00:02:55.558 00:02:55.558 regex: 00:02:55.558 00:02:55.558 ml: 00:02:55.558 00:02:55.558 vdpa: 00:02:55.558 00:02:55.558 event: 00:02:55.558 00:02:55.558 baseband: 00:02:55.558 00:02:55.558 gpu: 00:02:55.558 00:02:55.558 00:02:55.558 Message: 00:02:55.558 ================= 00:02:55.558 Content Skipped 00:02:55.558 ================= 00:02:55.558 00:02:55.558 apps: 00:02:55.558 00:02:55.558 libs: 00:02:55.558 00:02:55.558 drivers: 00:02:55.558 common/cpt: not in enabled drivers build config 00:02:55.558 common/dpaax: not in enabled drivers build config 00:02:55.558 common/iavf: not in enabled drivers build config 00:02:55.558 common/idpf: not in enabled drivers build config 00:02:55.558 common/mvep: not in enabled drivers build config 00:02:55.558 common/octeontx: not in enabled drivers build config 00:02:55.558 bus/auxiliary: not in enabled drivers build config 00:02:55.558 bus/cdx: not in enabled drivers build config 00:02:55.558 bus/dpaa: not in enabled drivers build config 00:02:55.558 bus/fslmc: not in enabled drivers build config 00:02:55.558 bus/ifpga: not in enabled drivers build config 00:02:55.558 bus/platform: not in enabled drivers build config 00:02:55.558 bus/vmbus: not in enabled drivers build config 00:02:55.558 common/cnxk: not in enabled drivers build config 00:02:55.558 common/mlx5: not in enabled drivers build config 00:02:55.558 common/nfp: not in enabled drivers build config 00:02:55.558 common/qat: not in enabled drivers build config 00:02:55.558 common/sfc_efx: not in enabled drivers build config 00:02:55.558 mempool/bucket: not in enabled drivers build config 00:02:55.558 mempool/cnxk: not in enabled drivers build config 00:02:55.558 mempool/dpaa: not in enabled drivers build config 00:02:55.558 mempool/dpaa2: not in enabled drivers build config 00:02:55.558 mempool/octeontx: not in enabled drivers build config 00:02:55.558 mempool/stack: not in enabled drivers build config 00:02:55.558 dma/cnxk: not in enabled drivers build config 00:02:55.558 dma/dpaa: not in enabled drivers build config 00:02:55.558 dma/dpaa2: not in enabled drivers build config 00:02:55.558 
dma/hisilicon: not in enabled drivers build config 00:02:55.558 dma/idxd: not in enabled drivers build config 00:02:55.558 dma/ioat: not in enabled drivers build config 00:02:55.558 dma/skeleton: not in enabled drivers build config 00:02:55.558 net/af_packet: not in enabled drivers build config 00:02:55.558 net/af_xdp: not in enabled drivers build config 00:02:55.558 net/ark: not in enabled drivers build config 00:02:55.558 net/atlantic: not in enabled drivers build config 00:02:55.558 net/avp: not in enabled drivers build config 00:02:55.558 net/axgbe: not in enabled drivers build config 00:02:55.558 net/bnx2x: not in enabled drivers build config 00:02:55.558 net/bnxt: not in enabled drivers build config 00:02:55.558 net/bonding: not in enabled drivers build config 00:02:55.558 net/cnxk: not in enabled drivers build config 00:02:55.558 net/cpfl: not in enabled drivers build config 00:02:55.558 net/cxgbe: not in enabled drivers build config 00:02:55.558 net/dpaa: not in enabled drivers build config 00:02:55.558 net/dpaa2: not in enabled drivers build config 00:02:55.558 net/e1000: not in enabled drivers build config 00:02:55.558 net/ena: not in enabled drivers build config 00:02:55.558 net/enetc: not in enabled drivers build config 00:02:55.558 net/enetfec: not in enabled drivers build config 00:02:55.558 net/enic: not in enabled drivers build config 00:02:55.558 net/failsafe: not in enabled drivers build config 00:02:55.558 net/fm10k: not in enabled drivers build config 00:02:55.558 net/gve: not in enabled drivers build config 00:02:55.558 net/hinic: not in enabled drivers build config 00:02:55.558 net/hns3: not in enabled drivers build config 00:02:55.558 net/iavf: not in enabled drivers build config 00:02:55.558 net/ice: not in enabled drivers build config 00:02:55.558 net/idpf: not in enabled drivers build config 00:02:55.558 net/igc: not in enabled drivers build config 00:02:55.558 net/ionic: not in enabled drivers build config 00:02:55.558 net/ipn3ke: not in enabled drivers build config 00:02:55.558 net/ixgbe: not in enabled drivers build config 00:02:55.558 net/mana: not in enabled drivers build config 00:02:55.558 net/memif: not in enabled drivers build config 00:02:55.558 net/mlx4: not in enabled drivers build config 00:02:55.558 net/mlx5: not in enabled drivers build config 00:02:55.558 net/mvneta: not in enabled drivers build config 00:02:55.558 net/mvpp2: not in enabled drivers build config 00:02:55.558 net/netvsc: not in enabled drivers build config 00:02:55.558 net/nfb: not in enabled drivers build config 00:02:55.558 net/nfp: not in enabled drivers build config 00:02:55.558 net/ngbe: not in enabled drivers build config 00:02:55.558 net/null: not in enabled drivers build config 00:02:55.558 net/octeontx: not in enabled drivers build config 00:02:55.558 net/octeon_ep: not in enabled drivers build config 00:02:55.558 net/pcap: not in enabled drivers build config 00:02:55.558 net/pfe: not in enabled drivers build config 00:02:55.558 net/qede: not in enabled drivers build config 00:02:55.558 net/ring: not in enabled drivers build config 00:02:55.558 net/sfc: not in enabled drivers build config 00:02:55.558 net/softnic: not in enabled drivers build config 00:02:55.558 net/tap: not in enabled drivers build config 00:02:55.558 net/thunderx: not in enabled drivers build config 00:02:55.558 net/txgbe: not in enabled drivers build config 00:02:55.558 net/vdev_netvsc: not in enabled drivers build config 00:02:55.558 net/vhost: not in enabled drivers build config 00:02:55.558 net/virtio: 
not in enabled drivers build config 00:02:55.558 net/vmxnet3: not in enabled drivers build config 00:02:55.558 raw/cnxk_bphy: not in enabled drivers build config 00:02:55.558 raw/cnxk_gpio: not in enabled drivers build config 00:02:55.558 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:55.558 raw/ifpga: not in enabled drivers build config 00:02:55.558 raw/ntb: not in enabled drivers build config 00:02:55.558 raw/skeleton: not in enabled drivers build config 00:02:55.558 crypto/armv8: not in enabled drivers build config 00:02:55.558 crypto/bcmfs: not in enabled drivers build config 00:02:55.558 crypto/caam_jr: not in enabled drivers build config 00:02:55.558 crypto/ccp: not in enabled drivers build config 00:02:55.558 crypto/cnxk: not in enabled drivers build config 00:02:55.558 crypto/dpaa_sec: not in enabled drivers build config 00:02:55.558 crypto/dpaa2_sec: not in enabled drivers build config 00:02:55.558 crypto/ipsec_mb: not in enabled drivers build config 00:02:55.558 crypto/mlx5: not in enabled drivers build config 00:02:55.558 crypto/mvsam: not in enabled drivers build config 00:02:55.558 crypto/nitrox: not in enabled drivers build config 00:02:55.558 crypto/null: not in enabled drivers build config 00:02:55.558 crypto/octeontx: not in enabled drivers build config 00:02:55.558 crypto/openssl: not in enabled drivers build config 00:02:55.558 crypto/scheduler: not in enabled drivers build config 00:02:55.558 crypto/uadk: not in enabled drivers build config 00:02:55.558 crypto/virtio: not in enabled drivers build config 00:02:55.558 compress/isal: not in enabled drivers build config 00:02:55.558 compress/mlx5: not in enabled drivers build config 00:02:55.558 compress/octeontx: not in enabled drivers build config 00:02:55.558 compress/zlib: not in enabled drivers build config 00:02:55.558 regex/mlx5: not in enabled drivers build config 00:02:55.558 regex/cn9k: not in enabled drivers build config 00:02:55.558 ml/cnxk: not in enabled drivers build config 00:02:55.558 vdpa/ifc: not in enabled drivers build config 00:02:55.558 vdpa/mlx5: not in enabled drivers build config 00:02:55.558 vdpa/nfp: not in enabled drivers build config 00:02:55.558 vdpa/sfc: not in enabled drivers build config 00:02:55.558 event/cnxk: not in enabled drivers build config 00:02:55.558 event/dlb2: not in enabled drivers build config 00:02:55.559 event/dpaa: not in enabled drivers build config 00:02:55.559 event/dpaa2: not in enabled drivers build config 00:02:55.559 event/dsw: not in enabled drivers build config 00:02:55.559 event/opdl: not in enabled drivers build config 00:02:55.559 event/skeleton: not in enabled drivers build config 00:02:55.559 event/sw: not in enabled drivers build config 00:02:55.559 event/octeontx: not in enabled drivers build config 00:02:55.559 baseband/acc: not in enabled drivers build config 00:02:55.559 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:55.559 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:55.559 baseband/la12xx: not in enabled drivers build config 00:02:55.559 baseband/null: not in enabled drivers build config 00:02:55.559 baseband/turbo_sw: not in enabled drivers build config 00:02:55.559 gpu/cuda: not in enabled drivers build config 00:02:55.559 00:02:55.559 00:02:55.559 Build targets in project: 220 00:02:55.559 00:02:55.559 DPDK 23.11.0 00:02:55.559 00:02:55.559 User defined options 00:02:55.559 libdir : lib 00:02:55.559 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:55.559 c_args : -fPIC -g -fcommon -Werror 
-Wno-stringop-overflow 00:02:55.559 c_link_args : 00:02:55.559 enable_docs : false 00:02:55.559 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:55.559 enable_kmods : false 00:02:55.559 machine : native 00:02:55.559 tests : false 00:02:55.559 00:02:55.559 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:55.559 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:55.559 14:09:00 -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:55.559 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:55.559 [1/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:55.559 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:55.559 [3/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:55.559 [4/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:55.559 [5/710] Linking static target lib/librte_kvargs.a 00:02:55.559 [6/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:55.559 [7/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:55.817 [8/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:55.817 [9/710] Linking static target lib/librte_log.a 00:02:55.817 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:55.817 [11/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.075 [12/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:56.075 [13/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.075 [14/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:56.075 [15/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:56.075 [16/710] Linking target lib/librte_log.so.24.0 00:02:56.334 [17/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:56.334 [18/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:56.334 [19/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:56.334 [20/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:56.592 [21/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:56.593 [22/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:56.593 [23/710] Linking target lib/librte_kvargs.so.24.0 00:02:56.593 [24/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:56.593 [25/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:56.851 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:56.851 [27/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:56.851 [28/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:56.851 [29/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:56.851 [30/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:56.851 [31/710] Linking static target lib/librte_telemetry.a 00:02:57.110 [32/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:57.110 [33/710] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:57.110 [34/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:57.369 [35/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:57.369 [36/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.369 [37/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:57.369 [38/710] Linking target lib/librte_telemetry.so.24.0 00:02:57.369 [39/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:57.369 [40/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:57.369 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:57.369 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:57.369 [43/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:57.369 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:57.628 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:57.628 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:57.628 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:57.887 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:57.887 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:57.887 [50/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:57.887 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:58.146 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:58.146 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:58.146 [54/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:58.146 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:58.146 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:58.146 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:58.405 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:58.405 [59/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:58.405 [60/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:58.405 [61/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:58.405 [62/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:58.405 [63/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:58.405 [64/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:58.664 [65/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:58.664 [66/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:58.664 [67/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:58.664 [68/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:58.923 [69/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:58.923 [70/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:58.923 [71/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:58.923 [72/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 
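[Editor's note] The configuration summary above (and the warning about running bare `meson [options]`, which is deprecated) corresponds to an ordinary two-step meson/ninja build. A minimal sketch of the equivalent, non-deprecated invocation, assuming DPDK's standard option names and the paths shown in the log (the literal command used by SPDK's autobuild script is not visible here), would be:

    # run from the DPDK source tree; build dir and prefix taken from the log above
    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false -Denable_kmods=false -Dtests=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
    ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10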
00:02:58.923 [73/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:58.923 [74/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:58.923 [75/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:58.923 [76/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:58.923 [77/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:59.182 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:59.182 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:59.442 [80/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:59.442 [81/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:59.442 [82/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:59.712 [83/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:59.712 [84/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:59.712 [85/710] Linking static target lib/librte_ring.a 00:02:59.712 [86/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:59.712 [87/710] Linking static target lib/librte_eal.a 00:02:59.969 [88/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.969 [89/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:59.969 [90/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:59.969 [91/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:59.969 [92/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:00.227 [93/710] Linking static target lib/librte_mempool.a 00:03:00.227 [94/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:00.227 [95/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:00.227 [96/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:00.227 [97/710] Linking static target lib/librte_rcu.a 00:03:00.485 [98/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:00.485 [99/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:00.485 [100/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.485 [101/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:00.485 [102/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:00.744 [103/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:00.744 [104/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.744 [105/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:00.744 [106/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:01.003 [107/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:01.003 [108/710] Linking static target lib/librte_mbuf.a 00:03:01.003 [109/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:01.003 [110/710] Linking static target lib/librte_net.a 00:03:01.003 [111/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:01.003 [112/710] Linking static target lib/librte_meter.a 00:03:01.261 [113/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:01.261 [114/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.261 [115/710] Generating 
lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.261 [116/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:01.261 [117/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:01.261 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:01.520 [119/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.089 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:02.089 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:02.089 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:02.349 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:02.349 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:02.349 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:02.349 [126/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:02.349 [127/710] Linking static target lib/librte_pci.a 00:03:02.349 [128/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:02.349 [129/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:02.608 [130/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.608 [131/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:02.608 [132/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:02.608 [133/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:02.608 [134/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:02.608 [135/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:02.608 [136/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:02.608 [137/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:02.608 [138/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:02.608 [139/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:02.867 [140/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:02.867 [141/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:02.867 [142/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:03.126 [143/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:03.126 [144/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:03.126 [145/710] Linking static target lib/librte_cmdline.a 00:03:03.385 [146/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:03.385 [147/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:03:03.385 [148/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:03:03.385 [149/710] Linking static target lib/librte_metrics.a 00:03:03.385 [150/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:03.645 [151/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.904 [152/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:03.904 [153/710] Linking static target lib/librte_timer.a 00:03:03.904 [154/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:03.904 
[155/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.164 [156/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.423 [157/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:03:04.423 [158/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:04.683 [159/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:04.683 [160/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:05.251 [161/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:05.251 [162/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:05.251 [163/710] Linking static target lib/librte_ethdev.a 00:03:05.251 [164/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:05.251 [165/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:05.251 [166/710] Linking static target lib/librte_bitratestats.a 00:03:05.510 [167/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:05.510 [168/710] Linking static target lib/librte_bbdev.a 00:03:05.510 [169/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:05.510 [170/710] Linking static target lib/librte_hash.a 00:03:05.510 [171/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.510 [172/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.510 [173/710] Linking target lib/librte_eal.so.24.0 00:03:05.770 [174/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:05.770 [175/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:03:05.770 [176/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:05.770 [177/710] Linking static target lib/acl/libavx2_tmp.a 00:03:05.770 [178/710] Linking target lib/librte_meter.so.24.0 00:03:05.770 [179/710] Linking target lib/librte_ring.so.24.0 00:03:05.770 [180/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:03:05.770 [181/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:03:05.770 [182/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:06.029 [183/710] Linking target lib/librte_timer.so.24.0 00:03:06.029 [184/710] Linking target lib/librte_pci.so.24.0 00:03:06.029 [185/710] Linking target lib/librte_rcu.so.24.0 00:03:06.029 [186/710] Linking target lib/librte_mempool.so.24.0 00:03:06.029 [187/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.029 [188/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:06.029 [189/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.029 [190/710] Linking static target lib/acl/libavx512_tmp.a 00:03:06.029 [191/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:06.029 [192/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:06.029 [193/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:06.029 [194/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:06.029 [195/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:03:06.029 [196/710] Linking target lib/librte_mbuf.so.24.0 00:03:06.289 [197/710] Generating symbol file 
lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:06.289 [198/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:06.289 [199/710] Linking static target lib/librte_acl.a 00:03:06.289 [200/710] Linking target lib/librte_net.so.24.0 00:03:06.289 [201/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:06.549 [202/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:06.549 [203/710] Linking target lib/librte_bbdev.so.24.0 00:03:06.549 [204/710] Linking target lib/librte_cmdline.so.24.0 00:03:06.549 [205/710] Linking static target lib/librte_cfgfile.a 00:03:06.549 [206/710] Linking target lib/librte_hash.so.24.0 00:03:06.549 [207/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:06.549 [208/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:06.549 [209/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.549 [210/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:06.549 [211/710] Linking target lib/librte_acl.so.24.0 00:03:06.808 [212/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:06.808 [213/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:03:06.808 [214/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.808 [215/710] Linking target lib/librte_cfgfile.so.24.0 00:03:06.808 [216/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:07.067 [217/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:07.067 [218/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:07.326 [219/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:07.326 [220/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:07.326 [221/710] Linking static target lib/librte_bpf.a 00:03:07.326 [222/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:07.326 [223/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:07.326 [224/710] Linking static target lib/librte_compressdev.a 00:03:07.585 [225/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:07.585 [226/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.585 [227/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:07.844 [228/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.844 [229/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:07.844 [230/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:07.844 [231/710] Linking target lib/librte_compressdev.so.24.0 00:03:07.844 [232/710] Linking static target lib/librte_distributor.a 00:03:07.844 [233/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:08.103 [234/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.103 [235/710] Linking target lib/librte_distributor.so.24.0 00:03:08.103 [236/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:08.103 [237/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:08.103 [238/710] Linking static 
target lib/librte_dmadev.a 00:03:08.670 [239/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.670 [240/710] Linking target lib/librte_dmadev.so.24.0 00:03:08.670 [241/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:08.670 [242/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:08.670 [243/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:08.928 [244/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:03:08.928 [245/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:08.928 [246/710] Linking static target lib/librte_efd.a 00:03:08.928 [247/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:09.186 [248/710] Linking static target lib/librte_cryptodev.a 00:03:09.186 [249/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.186 [250/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:09.186 [251/710] Linking target lib/librte_efd.so.24.0 00:03:09.445 [252/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.445 [253/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:09.445 [254/710] Linking target lib/librte_ethdev.so.24.0 00:03:09.445 [255/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:03:09.445 [256/710] Linking static target lib/librte_dispatcher.a 00:03:09.702 [257/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:09.702 [258/710] Linking target lib/librte_metrics.so.24.0 00:03:09.702 [259/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:09.702 [260/710] Linking target lib/librte_bpf.so.24.0 00:03:09.702 [261/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:03:09.702 [262/710] Linking target lib/librte_bitratestats.so.24.0 00:03:09.960 [263/710] Linking static target lib/librte_gpudev.a 00:03:09.960 [264/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:03:09.960 [265/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:09.960 [266/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:09.960 [267/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.960 [268/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:10.218 [269/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.218 [270/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:03:10.218 [271/710] Linking target lib/librte_cryptodev.so.24.0 00:03:10.218 [272/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:10.218 [273/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:10.477 [274/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:10.477 [275/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:10.477 [276/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.477 [277/710] Linking static target lib/librte_eventdev.a 00:03:10.477 [278/710] Linking target lib/librte_gpudev.so.24.0 00:03:10.736 
[279/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:10.736 [280/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:10.736 [281/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:10.736 [282/710] Linking static target lib/librte_gro.a 00:03:10.736 [283/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:10.736 [284/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:10.994 [285/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:10.994 [286/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:10.994 [287/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.994 [288/710] Linking target lib/librte_gro.so.24.0 00:03:11.253 [289/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:11.253 [290/710] Linking static target lib/librte_gso.a 00:03:11.253 [291/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.253 [292/710] Linking target lib/librte_gso.so.24.0 00:03:11.253 [293/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:11.511 [294/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:11.511 [295/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:11.511 [296/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:11.511 [297/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:11.511 [298/710] Linking static target lib/librte_jobstats.a 00:03:11.511 [299/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:11.769 [300/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:11.769 [301/710] Linking static target lib/librte_latencystats.a 00:03:11.769 [302/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:11.769 [303/710] Linking static target lib/librte_ip_frag.a 00:03:11.769 [304/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.769 [305/710] Linking target lib/librte_jobstats.so.24.0 00:03:11.769 [306/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.027 [307/710] Linking target lib/librte_latencystats.so.24.0 00:03:12.027 [308/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.027 [309/710] Linking target lib/librte_ip_frag.so.24.0 00:03:12.027 [310/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:03:12.027 [311/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:12.027 [312/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:03:12.027 [313/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:03:12.027 [314/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:12.027 [315/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:12.285 [316/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:12.285 [317/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:12.543 [318/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.543 [319/710] Linking target lib/librte_eventdev.so.24.0 00:03:12.543 [320/710] 
Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:03:12.543 [321/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:12.543 [322/710] Linking static target lib/librte_lpm.a 00:03:12.543 [323/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:12.543 [324/710] Linking target lib/librte_dispatcher.so.24.0 00:03:12.802 [325/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:12.802 [326/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:12.802 [327/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:12.802 [328/710] Linking static target lib/librte_pcapng.a 00:03:12.802 [329/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:12.802 [330/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:13.060 [331/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.060 [332/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:13.060 [333/710] Linking target lib/librte_lpm.so.24.0 00:03:13.060 [334/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.060 [335/710] Linking target lib/librte_pcapng.so.24.0 00:03:13.060 [336/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:03:13.060 [337/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:03:13.060 [338/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:13.318 [339/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:13.318 [340/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:13.577 [341/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:03:13.577 [342/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:13.577 [343/710] Linking static target lib/librte_power.a 00:03:13.577 [344/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:13.577 [345/710] Linking static target lib/librte_rawdev.a 00:03:13.577 [346/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:13.577 [347/710] Linking static target lib/librte_member.a 00:03:13.577 [348/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:13.577 [349/710] Linking static target lib/librte_regexdev.a 00:03:13.577 [350/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:03:13.836 [351/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:03:13.836 [352/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:03:13.836 [353/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:03:13.836 [354/710] Linking static target lib/librte_mldev.a 00:03:13.836 [355/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.095 [356/710] Linking target lib/librte_member.so.24.0 00:03:14.095 [357/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.095 [358/710] Linking target lib/librte_rawdev.so.24.0 00:03:14.095 [359/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:14.095 [360/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.095 [361/710] Linking target 
lib/librte_power.so.24.0 00:03:14.095 [362/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:14.354 [363/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.354 [364/710] Linking target lib/librte_regexdev.so.24.0 00:03:14.354 [365/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:14.354 [366/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:14.354 [367/710] Linking static target lib/librte_reorder.a 00:03:14.613 [368/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:14.613 [369/710] Linking static target lib/librte_rib.a 00:03:14.613 [370/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:14.613 [371/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:14.613 [372/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:14.613 [373/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:14.613 [374/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.871 [375/710] Linking target lib/librte_reorder.so.24.0 00:03:14.871 [376/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:14.871 [377/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:03:14.871 [378/710] Linking static target lib/librte_stack.a 00:03:14.872 [379/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:14.872 [380/710] Linking static target lib/librte_security.a 00:03:14.872 [381/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.872 [382/710] Linking target lib/librte_rib.so.24.0 00:03:15.130 [383/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.130 [384/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.130 [385/710] Linking target lib/librte_mldev.so.24.0 00:03:15.130 [386/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:03:15.130 [387/710] Linking target lib/librte_stack.so.24.0 00:03:15.389 [388/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.389 [389/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:15.389 [390/710] Linking target lib/librte_security.so.24.0 00:03:15.389 [391/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:15.389 [392/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:03:15.389 [393/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:15.648 [394/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:15.648 [395/710] Linking static target lib/librte_sched.a 00:03:15.907 [396/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:15.907 [397/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.907 [398/710] Linking target lib/librte_sched.so.24.0 00:03:15.907 [399/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:16.166 [400/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:03:16.166 [401/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:16.166 [402/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:16.425 [403/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 
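[Editor's note] The recurring "Generating lib/<name>.sym_chk" steps are DPDK's own symbol-export checks (each compares the objects just built against that library's version map), and the "Generating symbol file ...so.24.0.symbols" steps record what each shared object exports for later link steps. A quick manual spot-check of the same information, assuming the build directory shown above, could be:

    # list the public rte_ symbols exported by one of the freshly linked libraries
    nm -D --defined-only /home/vagrant/spdk_repo/dpdk/build-tmp/lib/librte_eal.so.24.0 | grep ' rte_'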
00:03:16.425 [404/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:16.685 [405/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:16.685 [406/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:03:16.944 [407/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:03:16.944 [408/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:03:16.944 [409/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:17.204 [410/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:17.204 [411/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:17.204 [412/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:03:17.204 [413/710] Linking static target lib/librte_ipsec.a 00:03:17.463 [414/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.463 [415/710] Linking target lib/librte_ipsec.so.24.0 00:03:17.463 [416/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:03:17.463 [417/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:03:17.463 [418/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:03:17.463 [419/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:03:17.463 [420/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:03:17.463 [421/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:03:17.463 [422/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:17.721 [423/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:18.290 [424/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:18.290 [425/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:18.290 [426/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:18.290 [427/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:18.549 [428/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:18.549 [429/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:03:18.549 [430/710] Linking static target lib/librte_fib.a 00:03:18.549 [431/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:18.549 [432/710] Linking static target lib/librte_pdcp.a 00:03:18.808 [433/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.808 [434/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.808 [435/710] Linking target lib/librte_fib.so.24.0 00:03:18.808 [436/710] Linking target lib/librte_pdcp.so.24.0 00:03:18.808 [437/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:19.376 [438/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:19.376 [439/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:19.376 [440/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:19.376 [441/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:19.376 [442/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:19.633 [443/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:19.633 [444/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:19.891 [445/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:19.891 [446/710] Compiling C object 
lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:19.891 [447/710] Linking static target lib/librte_port.a 00:03:20.149 [448/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:20.149 [449/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:20.149 [450/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:20.149 [451/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:20.149 [452/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:20.149 [453/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.407 [454/710] Linking target lib/librte_port.so.24.0 00:03:20.407 [455/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:20.407 [456/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:03:20.407 [457/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:20.407 [458/710] Linking static target lib/librte_pdump.a 00:03:20.407 [459/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:20.665 [460/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.666 [461/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:20.666 [462/710] Linking target lib/librte_pdump.so.24.0 00:03:20.923 [463/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:20.923 [464/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:21.181 [465/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:21.181 [466/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:21.181 [467/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:21.181 [468/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:21.440 [469/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:21.440 [470/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:21.440 [471/710] Linking static target lib/librte_table.a 00:03:21.699 [472/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:21.699 [473/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:21.958 [474/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:21.958 [475/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.216 [476/710] Linking target lib/librte_table.so.24.0 00:03:22.216 [477/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:03:22.216 [478/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:22.475 [479/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:22.475 [480/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:22.746 [481/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:22.746 [482/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:23.023 [483/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:23.023 [484/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:23.023 [485/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:03:23.023 [486/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 
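[Editor's note] From here the build moves from the remaining core libraries toward the driver set selected by enable_drivers (bus/pci, bus/vdev, mempool/ring, net/i40e and its base code); everything else was skipped per the long "not in enabled drivers build config" list at the top. A hedged way to confirm afterwards that only those PMDs were produced, again assuming the same build directory, is:

    ls /home/vagrant/spdk_repo/dpdk/build-tmp/drivers/librte_*.so.24.0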
00:03:23.597 [487/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:23.597 [488/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:23.597 [489/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:23.597 [490/710] Linking static target lib/librte_graph.a 00:03:23.597 [491/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:23.597 [492/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:23.855 [493/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:24.114 [494/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.114 [495/710] Linking target lib/librte_graph.so.24.0 00:03:24.114 [496/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:03:24.114 [497/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:24.114 [498/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:24.114 [499/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:24.682 [500/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:24.683 [501/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:24.683 [502/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:24.683 [503/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:24.683 [504/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:24.941 [505/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:24.941 [506/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:24.941 [507/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:25.199 [508/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:25.457 [509/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:25.457 [510/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:25.457 [511/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:25.457 [512/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:25.457 [513/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:25.457 [514/710] Linking static target lib/librte_node.a 00:03:25.457 [515/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:25.715 [516/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.715 [517/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:25.715 [518/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:25.974 [519/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:25.974 [520/710] Linking target lib/librte_node.so.24.0 00:03:25.974 [521/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:25.974 [522/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:25.974 [523/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:25.974 [524/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:25.974 [525/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:26.232 [526/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:26.232 [527/710] Linking static target 
drivers/librte_bus_pci.a 00:03:26.232 [528/710] Linking static target drivers/librte_bus_vdev.a 00:03:26.492 [529/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:26.492 [530/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:26.492 [531/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:26.492 [532/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.492 [533/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:26.492 [534/710] Linking target drivers/librte_bus_vdev.so.24.0 00:03:26.492 [535/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:26.492 [536/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:26.492 [537/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:03:26.751 [538/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.751 [539/710] Linking target drivers/librte_bus_pci.so.24.0 00:03:26.751 [540/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:26.751 [541/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:26.751 [542/710] Linking static target drivers/librte_mempool_ring.a 00:03:26.751 [543/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:26.751 [544/710] Linking target drivers/librte_mempool_ring.so.24.0 00:03:26.751 [545/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:03:27.011 [546/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:27.270 [547/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:27.270 [548/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:27.529 [549/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:27.529 [550/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:27.529 [551/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:28.468 [552/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:28.468 [553/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:28.468 [554/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:28.468 [555/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:28.468 [556/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:28.468 [557/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:28.728 [558/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:28.987 [559/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:29.247 [560/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:29.247 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:29.247 [562/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:29.814 [563/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:29.814 [564/710] Compiling C object 
app/dpdk-graph.p/graph_conn.c.o 00:03:29.814 [565/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:30.072 [566/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:30.072 [567/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:30.331 [568/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:30.331 [569/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:30.331 [570/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:30.331 [571/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:30.590 [572/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:30.590 [573/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:30.849 [574/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:30.849 [575/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:30.849 [576/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:30.849 [577/710] Linking static target lib/librte_vhost.a 00:03:30.849 [578/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:31.109 [579/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:31.109 [580/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:31.109 [581/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:31.109 [582/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:31.368 [583/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:31.368 [584/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:31.368 [585/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:31.368 [586/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:31.368 [587/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:31.368 [588/710] Linking static target drivers/librte_net_i40e.a 00:03:31.626 [589/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:31.626 [590/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:31.626 [591/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:31.885 [592/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:31.885 [593/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.143 [594/710] Linking target lib/librte_vhost.so.24.0 00:03:32.143 [595/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.143 [596/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:32.143 [597/710] Linking target drivers/librte_net_i40e.so.24.0 00:03:32.143 [598/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:32.402 [599/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:32.661 [600/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:32.661 [601/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:32.919 [602/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:32.919 [603/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:32.919 [604/710] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:32.919 [605/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:32.919 [606/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:33.178 [607/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:33.437 [608/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:33.695 [609/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:33.695 [610/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:33.695 [611/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:33.695 [612/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:33.695 [613/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:33.955 [614/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:33.955 [615/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:33.955 [616/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:33.955 [617/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:34.214 [618/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:34.214 [619/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:34.473 [620/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:34.732 [621/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:34.732 [622/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:34.732 [623/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:35.300 [624/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:35.300 [625/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:35.559 [626/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:35.559 [627/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:35.559 [628/710] Linking static target lib/librte_pipeline.a 00:03:35.560 [629/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:35.560 [630/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:35.819 [631/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:35.819 [632/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:36.078 [633/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:36.078 [634/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:36.078 [635/710] Linking target app/dpdk-graph 00:03:36.078 [636/710] Linking target app/dpdk-dumpcap 00:03:36.078 [637/710] Linking target app/dpdk-pdump 00:03:36.337 [638/710] Linking target app/dpdk-proc-info 00:03:36.337 [639/710] Linking target app/dpdk-test-acl 00:03:36.596 [640/710] Linking target app/dpdk-test-cmdline 00:03:36.596 [641/710] Linking target app/dpdk-test-crypto-perf 00:03:36.596 [642/710] Linking target 
app/dpdk-test-compress-perf 00:03:36.596 [643/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:36.596 [644/710] Linking target app/dpdk-test-dma-perf 00:03:36.596 [645/710] Linking target app/dpdk-test-fib 00:03:36.596 [646/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:36.854 [647/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:36.855 [648/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:37.114 [649/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:37.114 [650/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:37.114 [651/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:37.114 [652/710] Linking target app/dpdk-test-flow-perf 00:03:37.373 [653/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:37.373 [654/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:37.373 [655/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:37.373 [656/710] Linking target app/dpdk-test-eventdev 00:03:37.632 [657/710] Linking target app/dpdk-test-gpudev 00:03:37.632 [658/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:37.632 [659/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:37.891 [660/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:37.891 [661/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:37.891 [662/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:37.891 [663/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:37.891 [664/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:38.150 [665/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.150 [666/710] Linking target lib/librte_pipeline.so.24.0 00:03:38.150 [667/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:38.150 [668/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:38.410 [669/710] Linking target app/dpdk-test-bbdev 00:03:38.410 [670/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:38.410 [671/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:38.410 [672/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:38.410 [673/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:38.978 [674/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:38.978 [675/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:38.978 [676/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:38.978 [677/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:38.978 [678/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:39.237 [679/710] Linking target app/dpdk-test-pipeline 00:03:39.237 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:39.496 [681/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:39.496 [682/710] Linking target app/dpdk-test-mldev 00:03:39.496 [683/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 
00:03:40.062 [684/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:40.062 [685/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:40.062 [686/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:40.062 [687/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:40.062 [688/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:40.320 [689/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:40.320 [690/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:40.580 [691/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:40.580 [692/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:40.839 [693/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:40.839 [694/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:41.406 [695/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:41.406 [696/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:41.406 [697/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:41.665 [698/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:41.665 [699/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:41.665 [700/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:41.665 [701/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:41.923 [702/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:41.923 [703/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:41.923 [704/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:41.923 [705/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:42.181 [706/710] Linking target app/dpdk-test-sad 00:03:42.181 [707/710] Linking target app/dpdk-test-regex 00:03:42.438 [708/710] Linking target app/dpdk-testpmd 00:03:42.438 [709/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:42.727 [710/710] Linking target app/dpdk-test-security-perf 00:03:42.984 14:09:48 -- common/autobuild_common.sh@190 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:42.984 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:42.984 [0/1] Installing files. 
00:03:43.245 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:43.245 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:43.245 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:43.245 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:43.245 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:43.245 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:43.245 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:43.245 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:43.245 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:43.245 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:43.245 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:43.245 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:43.245 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:43.245 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:43.245 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:43.245 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:43.245 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:43.245 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:43.245 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:43.245 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:43.245 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:43.245 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:43.245 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:43.245 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:43.245 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:43.245 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:43.245 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:43.245 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:43.245 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:43.245 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:43.245 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:43.245 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:43.245 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:43.245 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:43.246 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:43.246 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:43.246 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:43.246 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:43.246 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:43.247 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:43.247 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:43.247 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:43.248 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:43.248 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:43.248 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:43.248 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.248 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.249 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.249 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.249 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.249 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.249 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.249 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.249 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.249 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.249 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.249 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.249 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.249 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.249 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.249 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.249 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.249 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.249 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.249 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.249 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.249 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.249 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.249 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.249 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.249 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.249 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.249 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.249 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.249 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.249 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.249 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
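[editor's note, not part of the CI output] The entries above show each DPDK library being installed both as a static archive (librte_*.a) and as a versioned shared object (librte_*.so.24.0) under /home/vagrant/spdk_repo/dpdk/build/lib; later in this same log the install step also drops libdpdk.pc into build/lib/pkgconfig and the unversioned librte_*.so symlinks. As a minimal, purely illustrative sketch (assuming that layout and a hypothetical source file hello_dpdk.c, neither compiled nor run by this job), an application could be built against this tree roughly like so:

  # illustrative only -- point pkg-config and the loader at the install tree from this log
  export DPDK_BUILD=/home/vagrant/spdk_repo/dpdk/build
  export PKG_CONFIG_PATH="$DPDK_BUILD/lib/pkgconfig:$PKG_CONFIG_PATH"
  export LD_LIBRARY_PATH="$DPDK_BUILD/lib:$LD_LIBRARY_PATH"
  # libdpdk.pc resolves the headers in build/include and the librte_* shared objects
  cc -O2 hello_dpdk.c -o hello_dpdk $(pkg-config --cflags --libs libdpdk)

The exact compiler invocation is an assumption for illustration; the paths and the libdpdk.pc pkg-config name are the ones recorded by this install step.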
00:03:43.249 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.249 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.249 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.249 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.249 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.249 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.249 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.249 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.249 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.249 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.817 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.817 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.817 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.817 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:43.817 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.817 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:43.817 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.818 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:43.818 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:43.818 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:43.818 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:43.818 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:43.818 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:43.818 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:43.818 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:43.818 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:43.818 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:43.818 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:43.818 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:43.818 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:43.818 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:43.818 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:43.818 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:43.818 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:43.818 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:43.818 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:43.818 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:43.818 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:43.818 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:43.818 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.818 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.819 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.820 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.821 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.821 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.821 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.821 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.821 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.821 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.821 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.821 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.821 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.821 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.821 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.821 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.821 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.821 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.821 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.821 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.821 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.821 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:43.821 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.821 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.821 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.821 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:43.821 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:43.821 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:43.821 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:43.821 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:43.821 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:43.821 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:43.821 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:43.821 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:43.821 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:43.821 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:43.821 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:43.821 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:43.821 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:43.821 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:43.821 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:43.821 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:43.821 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:43.821 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:43.821 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:43.821 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:43.821 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:43.821 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:43.821 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:43.821 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:43.821 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:43.821 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:43.821 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:43.821 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:43.821 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:43.821 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:43.821 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:43.821 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:43.821 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:43.821 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:43.821 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:43.821 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:43.821 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:43.821 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:43.821 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:43.821 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:43.821 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:43.821 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:43.821 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:43.821 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:43.821 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:43.821 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:43.821 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:43.821 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:43.821 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:43.821 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:43.821 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:43.821 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:43.821 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:43.821 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:43.821 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:43.821 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:43.821 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:43.821 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:43.821 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:43.821 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:43.821 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:43.821 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:43.821 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:43.821 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:43.821 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:43.821 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:43.821 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:43.821 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:43.821 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:43.821 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:43.821 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:43.821 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:43.821 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:43.821 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:43.821 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:43.821 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:43.821 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:43.821 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:43.821 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:43.821 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:43.821 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:43.821 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:43.822 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:43.822 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:43.822 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:43.822 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:43.822 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:43.822 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:43.822 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:43.822 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:43.822 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:43.822 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:43.822 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:43.822 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:43.822 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:43.822 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:43.822 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:43.822 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:43.822 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:43.822 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:43.822 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:43.822 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:43.822 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:43.822 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:43.822 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:43.822 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:43.822 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:43.822 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:43.822 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:43.822 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:43.822 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:43.822 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:43.822 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:43.822 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:43.822 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:43.822 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:43.822 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:43.822 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:43.822 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:43.822 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:43.822 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:43.822 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:43.822 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:43.822 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:43.822 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:43.822 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:43.822 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:43.822 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:43.822 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:43.822 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:43.822 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:43.822 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:43.822 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:43.822 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:43.822 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:43.822 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:43.822 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:43.822 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:43.822 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
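The install step above lays down a three-level name chain for every DPDK library: the real object carries the full version (for example librte_eal.so.24.0), a SONAME-level symlink drops the minor number (librte_eal.so.24), and an unversioned symlink (librte_eal.so) is what the compile-time linker resolves. A minimal sketch of how that chain can be inspected on the build host is shown below; the paths come from the log, while readlink and objdump are ordinary tools and are not part of the install itself.

```bash
#!/usr/bin/env bash
# Inspect the versioned symlink chain produced by the DPDK install above.
# LIBDIR is taken from the log; the library names are examples from the same run.
LIBDIR=/home/vagrant/spdk_repo/dpdk/build/lib

for name in librte_eal librte_mbuf librte_ethdev; do
    dev_link="$LIBDIR/$name.so"            # unversioned name used at link time (-lrte_eal)
    real=$(readlink -f "$dev_link")        # resolves through .so -> .so.24 -> .so.24.0
    echo "$dev_link -> $(readlink "$dev_link") -> $(basename "$real")"

    # The SONAME embedded in the object is what the runtime loader records.
    objdump -p "$real" | awk '/SONAME/ {print "  SONAME:", $2}'
done
```

The same pattern repeats for the PMD drivers: the './librte_bus_pci.so*' entries above are first relocated into dpdk/pmds-24.0/, and the later "Installing symlink" lines point the plugin directory names back at those versioned files.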
00:03:43.822 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:43.822 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:43.822 14:09:49 -- common/autobuild_common.sh@192 -- $ uname -s 00:03:43.822 14:09:49 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:43.822 14:09:49 -- common/autobuild_common.sh@203 -- $ cat 00:03:43.822 14:09:49 -- common/autobuild_common.sh@208 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:43.822 00:03:43.822 real 0m55.558s 00:03:43.822 user 6m36.775s 00:03:43.822 sys 1m6.329s 00:03:43.822 14:09:49 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:43.822 14:09:49 -- common/autotest_common.sh@10 -- $ set +x 00:03:43.822 ************************************ 00:03:43.822 END TEST build_native_dpdk 00:03:43.822 ************************************ 00:03:43.822 14:09:49 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:43.822 14:09:49 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:43.822 14:09:49 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:43.822 14:09:49 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:43.822 14:09:49 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:43.822 14:09:49 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:43.822 14:09:49 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:43.822 14:09:49 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang --with-shared 00:03:44.081 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:44.081 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:44.081 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:44.081 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:44.648 Using 'verbs' RDMA provider 00:04:00.156 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:04:12.366 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:04:12.625 go version go1.21.1 linux/amd64 00:04:12.883 Creating mk/config.mk...done. 00:04:12.883 Creating mk/cc.flags.mk...done. 00:04:12.883 Type 'make' to build. 00:04:12.883 14:10:18 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:04:12.883 14:10:18 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:04:12.883 14:10:18 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:04:12.883 14:10:18 -- common/autotest_common.sh@10 -- $ set +x 00:04:12.883 ************************************ 00:04:12.883 START TEST make 00:04:12.883 ************************************ 00:04:12.883 14:10:18 -- common/autotest_common.sh@1114 -- $ make -j10 00:04:13.141 make[1]: Nothing to be done for 'all'. 
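The configure invocation above points SPDK at the freshly built DPDK with --with-dpdk=/home/vagrant/spdk_repo/dpdk/build, and the following lines confirm that the libdpdk.pc files installed earlier are picked up from build/lib/pkgconfig. The sketch below reproduces that pkg-config hookup by hand; the directory is the one reported in the log, while the PKG_CONFIG_PATH export and the queries are standard pkg-config usage rather than commands taken from the log.

```bash
#!/usr/bin/env bash
# Manually reproduce the "Using .../build/lib/pkgconfig for additional libs" lookup.
export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig

pkg-config --modversion libdpdk   # should report 24.0 for this build
pkg-config --cflags libdpdk       # expected: -I/home/vagrant/spdk_repo/dpdk/build/include ...
pkg-config --libs libdpdk         # expected: -L.../build/lib -lrte_eal -lrte_mbuf ...

# SPDK's configure consumes the same .pc files when given --with-dpdk,
# as in the invocation logged above (full flag list abbreviated here):
#   ./configure --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared ...
```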
00:04:35.070 CC lib/log/log.o 00:04:35.070 CC lib/log/log_flags.o 00:04:35.070 CC lib/log/log_deprecated.o 00:04:35.070 CC lib/ut/ut.o 00:04:35.070 CC lib/ut_mock/mock.o 00:04:35.070 LIB libspdk_ut_mock.a 00:04:35.070 LIB libspdk_log.a 00:04:35.070 LIB libspdk_ut.a 00:04:35.070 SO libspdk_ut_mock.so.5.0 00:04:35.070 SO libspdk_log.so.6.1 00:04:35.070 SO libspdk_ut.so.1.0 00:04:35.070 SYMLINK libspdk_ut_mock.so 00:04:35.070 SYMLINK libspdk_ut.so 00:04:35.070 SYMLINK libspdk_log.so 00:04:35.070 CC lib/util/base64.o 00:04:35.070 CC lib/util/bit_array.o 00:04:35.070 CC lib/util/crc32.o 00:04:35.070 CC lib/util/cpuset.o 00:04:35.070 CC lib/util/crc16.o 00:04:35.070 CC lib/util/crc32c.o 00:04:35.070 CC lib/ioat/ioat.o 00:04:35.070 CC lib/dma/dma.o 00:04:35.070 CXX lib/trace_parser/trace.o 00:04:35.070 CC lib/vfio_user/host/vfio_user_pci.o 00:04:35.070 CC lib/util/crc32_ieee.o 00:04:35.070 CC lib/util/crc64.o 00:04:35.070 CC lib/util/dif.o 00:04:35.070 CC lib/vfio_user/host/vfio_user.o 00:04:35.070 CC lib/util/fd.o 00:04:35.070 LIB libspdk_dma.a 00:04:35.070 CC lib/util/file.o 00:04:35.070 CC lib/util/hexlify.o 00:04:35.070 SO libspdk_dma.so.3.0 00:04:35.070 CC lib/util/iov.o 00:04:35.070 SYMLINK libspdk_dma.so 00:04:35.070 CC lib/util/math.o 00:04:35.070 LIB libspdk_ioat.a 00:04:35.070 CC lib/util/pipe.o 00:04:35.070 CC lib/util/strerror_tls.o 00:04:35.070 SO libspdk_ioat.so.6.0 00:04:35.070 LIB libspdk_vfio_user.a 00:04:35.328 CC lib/util/string.o 00:04:35.328 SO libspdk_vfio_user.so.4.0 00:04:35.328 SYMLINK libspdk_ioat.so 00:04:35.329 CC lib/util/fd_group.o 00:04:35.329 CC lib/util/uuid.o 00:04:35.329 SYMLINK libspdk_vfio_user.so 00:04:35.329 CC lib/util/xor.o 00:04:35.329 CC lib/util/zipf.o 00:04:35.329 LIB libspdk_util.a 00:04:35.587 SO libspdk_util.so.8.0 00:04:35.587 SYMLINK libspdk_util.so 00:04:35.587 LIB libspdk_trace_parser.a 00:04:35.845 SO libspdk_trace_parser.so.4.0 00:04:35.845 CC lib/json/json_parse.o 00:04:35.845 CC lib/json/json_util.o 00:04:35.845 CC lib/json/json_write.o 00:04:35.845 CC lib/vmd/vmd.o 00:04:35.845 CC lib/rdma/common.o 00:04:35.845 CC lib/rdma/rdma_verbs.o 00:04:35.845 CC lib/idxd/idxd.o 00:04:35.845 CC lib/env_dpdk/env.o 00:04:35.845 CC lib/conf/conf.o 00:04:35.845 SYMLINK libspdk_trace_parser.so 00:04:35.845 CC lib/idxd/idxd_user.o 00:04:35.845 CC lib/vmd/led.o 00:04:36.105 CC lib/env_dpdk/memory.o 00:04:36.105 LIB libspdk_conf.a 00:04:36.105 CC lib/idxd/idxd_kernel.o 00:04:36.105 SO libspdk_conf.so.5.0 00:04:36.105 CC lib/env_dpdk/pci.o 00:04:36.105 LIB libspdk_rdma.a 00:04:36.105 LIB libspdk_json.a 00:04:36.105 SO libspdk_rdma.so.5.0 00:04:36.105 SO libspdk_json.so.5.1 00:04:36.105 SYMLINK libspdk_conf.so 00:04:36.105 CC lib/env_dpdk/init.o 00:04:36.105 SYMLINK libspdk_rdma.so 00:04:36.105 CC lib/env_dpdk/threads.o 00:04:36.105 SYMLINK libspdk_json.so 00:04:36.105 CC lib/env_dpdk/pci_ioat.o 00:04:36.105 CC lib/env_dpdk/pci_virtio.o 00:04:36.105 CC lib/env_dpdk/pci_vmd.o 00:04:36.364 CC lib/env_dpdk/pci_idxd.o 00:04:36.364 CC lib/env_dpdk/pci_event.o 00:04:36.364 CC lib/env_dpdk/sigbus_handler.o 00:04:36.364 LIB libspdk_idxd.a 00:04:36.364 CC lib/env_dpdk/pci_dpdk.o 00:04:36.364 SO libspdk_idxd.so.11.0 00:04:36.364 LIB libspdk_vmd.a 00:04:36.364 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:36.364 SO libspdk_vmd.so.5.0 00:04:36.364 SYMLINK libspdk_idxd.so 00:04:36.364 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:36.364 SYMLINK libspdk_vmd.so 00:04:36.622 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:36.622 CC lib/jsonrpc/jsonrpc_client.o 00:04:36.622 CC 
lib/jsonrpc/jsonrpc_server.o 00:04:36.622 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:36.622 LIB libspdk_jsonrpc.a 00:04:36.881 SO libspdk_jsonrpc.so.5.1 00:04:36.881 SYMLINK libspdk_jsonrpc.so 00:04:36.881 LIB libspdk_env_dpdk.a 00:04:36.881 CC lib/rpc/rpc.o 00:04:37.139 SO libspdk_env_dpdk.so.13.0 00:04:37.139 SYMLINK libspdk_env_dpdk.so 00:04:37.139 LIB libspdk_rpc.a 00:04:37.139 SO libspdk_rpc.so.5.0 00:04:37.139 SYMLINK libspdk_rpc.so 00:04:37.397 CC lib/sock/sock_rpc.o 00:04:37.397 CC lib/sock/sock.o 00:04:37.397 CC lib/trace/trace.o 00:04:37.397 CC lib/trace/trace_flags.o 00:04:37.397 CC lib/trace/trace_rpc.o 00:04:37.397 CC lib/notify/notify.o 00:04:37.397 CC lib/notify/notify_rpc.o 00:04:37.655 LIB libspdk_notify.a 00:04:37.655 SO libspdk_notify.so.5.0 00:04:37.655 SYMLINK libspdk_notify.so 00:04:37.655 LIB libspdk_trace.a 00:04:37.655 SO libspdk_trace.so.9.0 00:04:37.915 SYMLINK libspdk_trace.so 00:04:37.915 LIB libspdk_sock.a 00:04:37.915 SO libspdk_sock.so.8.0 00:04:37.915 SYMLINK libspdk_sock.so 00:04:37.915 CC lib/thread/thread.o 00:04:37.915 CC lib/thread/iobuf.o 00:04:38.175 CC lib/nvme/nvme_ctrlr.o 00:04:38.175 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:38.175 CC lib/nvme/nvme_fabric.o 00:04:38.175 CC lib/nvme/nvme_qpair.o 00:04:38.175 CC lib/nvme/nvme_ns_cmd.o 00:04:38.175 CC lib/nvme/nvme_pcie_common.o 00:04:38.175 CC lib/nvme/nvme_ns.o 00:04:38.175 CC lib/nvme/nvme_pcie.o 00:04:38.435 CC lib/nvme/nvme.o 00:04:39.004 CC lib/nvme/nvme_quirks.o 00:04:39.004 CC lib/nvme/nvme_transport.o 00:04:39.004 CC lib/nvme/nvme_discovery.o 00:04:39.004 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:39.004 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:39.004 CC lib/nvme/nvme_tcp.o 00:04:39.263 CC lib/nvme/nvme_opal.o 00:04:39.263 CC lib/nvme/nvme_io_msg.o 00:04:39.523 CC lib/nvme/nvme_poll_group.o 00:04:39.523 LIB libspdk_thread.a 00:04:39.523 SO libspdk_thread.so.9.0 00:04:39.523 CC lib/nvme/nvme_zns.o 00:04:39.523 CC lib/nvme/nvme_cuse.o 00:04:39.523 SYMLINK libspdk_thread.so 00:04:39.523 CC lib/nvme/nvme_vfio_user.o 00:04:39.523 CC lib/nvme/nvme_rdma.o 00:04:39.788 CC lib/accel/accel.o 00:04:39.788 CC lib/blob/blobstore.o 00:04:39.788 CC lib/blob/request.o 00:04:40.047 CC lib/blob/zeroes.o 00:04:40.047 CC lib/blob/blob_bs_dev.o 00:04:40.047 CC lib/accel/accel_rpc.o 00:04:40.047 CC lib/accel/accel_sw.o 00:04:40.047 CC lib/init/json_config.o 00:04:40.305 CC lib/virtio/virtio.o 00:04:40.305 CC lib/virtio/virtio_vhost_user.o 00:04:40.305 CC lib/virtio/virtio_vfio_user.o 00:04:40.305 CC lib/init/subsystem.o 00:04:40.305 CC lib/init/subsystem_rpc.o 00:04:40.305 CC lib/init/rpc.o 00:04:40.564 CC lib/virtio/virtio_pci.o 00:04:40.564 LIB libspdk_init.a 00:04:40.564 SO libspdk_init.so.4.0 00:04:40.564 LIB libspdk_accel.a 00:04:40.564 SYMLINK libspdk_init.so 00:04:40.564 SO libspdk_accel.so.14.0 00:04:40.822 SYMLINK libspdk_accel.so 00:04:40.822 LIB libspdk_virtio.a 00:04:40.822 SO libspdk_virtio.so.6.0 00:04:40.822 CC lib/event/reactor.o 00:04:40.822 CC lib/event/app.o 00:04:40.822 CC lib/event/log_rpc.o 00:04:40.822 CC lib/event/app_rpc.o 00:04:40.822 CC lib/event/scheduler_static.o 00:04:40.822 LIB libspdk_nvme.a 00:04:40.822 SYMLINK libspdk_virtio.so 00:04:40.822 CC lib/bdev/bdev.o 00:04:40.822 CC lib/bdev/bdev_rpc.o 00:04:40.822 CC lib/bdev/bdev_zone.o 00:04:40.822 CC lib/bdev/part.o 00:04:40.822 CC lib/bdev/scsi_nvme.o 00:04:41.081 SO libspdk_nvme.so.12.0 00:04:41.081 LIB libspdk_event.a 00:04:41.338 SO libspdk_event.so.12.0 00:04:41.338 SYMLINK libspdk_nvme.so 00:04:41.338 SYMLINK libspdk_event.so 00:04:42.285 
LIB libspdk_blob.a 00:04:42.285 SO libspdk_blob.so.10.1 00:04:42.285 SYMLINK libspdk_blob.so 00:04:42.543 CC lib/lvol/lvol.o 00:04:42.543 CC lib/blobfs/blobfs.o 00:04:42.543 CC lib/blobfs/tree.o 00:04:43.474 LIB libspdk_lvol.a 00:04:43.474 SO libspdk_lvol.so.9.1 00:04:43.474 LIB libspdk_bdev.a 00:04:43.474 LIB libspdk_blobfs.a 00:04:43.474 SO libspdk_bdev.so.14.0 00:04:43.474 SYMLINK libspdk_lvol.so 00:04:43.474 SO libspdk_blobfs.so.9.0 00:04:43.474 SYMLINK libspdk_bdev.so 00:04:43.474 SYMLINK libspdk_blobfs.so 00:04:43.474 CC lib/nvmf/ctrlr.o 00:04:43.474 CC lib/nvmf/ctrlr_discovery.o 00:04:43.474 CC lib/nvmf/ctrlr_bdev.o 00:04:43.474 CC lib/nvmf/nvmf_rpc.o 00:04:43.474 CC lib/nvmf/subsystem.o 00:04:43.474 CC lib/nvmf/nvmf.o 00:04:43.474 CC lib/scsi/dev.o 00:04:43.474 CC lib/ublk/ublk.o 00:04:43.474 CC lib/ftl/ftl_core.o 00:04:43.474 CC lib/nbd/nbd.o 00:04:44.039 CC lib/scsi/lun.o 00:04:44.039 CC lib/ftl/ftl_init.o 00:04:44.039 CC lib/nbd/nbd_rpc.o 00:04:44.039 CC lib/scsi/port.o 00:04:44.039 CC lib/scsi/scsi.o 00:04:44.298 CC lib/ublk/ublk_rpc.o 00:04:44.298 LIB libspdk_nbd.a 00:04:44.298 CC lib/ftl/ftl_layout.o 00:04:44.298 CC lib/scsi/scsi_bdev.o 00:04:44.298 SO libspdk_nbd.so.6.0 00:04:44.298 CC lib/scsi/scsi_pr.o 00:04:44.298 CC lib/scsi/scsi_rpc.o 00:04:44.298 CC lib/nvmf/transport.o 00:04:44.298 SYMLINK libspdk_nbd.so 00:04:44.298 CC lib/nvmf/tcp.o 00:04:44.298 LIB libspdk_ublk.a 00:04:44.298 SO libspdk_ublk.so.2.0 00:04:44.556 SYMLINK libspdk_ublk.so 00:04:44.556 CC lib/nvmf/rdma.o 00:04:44.556 CC lib/ftl/ftl_debug.o 00:04:44.556 CC lib/ftl/ftl_io.o 00:04:44.556 CC lib/scsi/task.o 00:04:44.556 CC lib/ftl/ftl_sb.o 00:04:44.556 CC lib/ftl/ftl_l2p.o 00:04:44.556 CC lib/ftl/ftl_l2p_flat.o 00:04:44.814 CC lib/ftl/ftl_nv_cache.o 00:04:44.814 CC lib/ftl/ftl_band.o 00:04:44.814 LIB libspdk_scsi.a 00:04:44.814 CC lib/ftl/ftl_band_ops.o 00:04:44.814 SO libspdk_scsi.so.8.0 00:04:44.814 CC lib/ftl/ftl_writer.o 00:04:44.814 CC lib/ftl/ftl_rq.o 00:04:44.814 SYMLINK libspdk_scsi.so 00:04:44.814 CC lib/ftl/ftl_reloc.o 00:04:44.814 CC lib/ftl/ftl_l2p_cache.o 00:04:45.072 CC lib/ftl/ftl_p2l.o 00:04:45.072 CC lib/ftl/mngt/ftl_mngt.o 00:04:45.072 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:45.072 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:45.072 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:45.332 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:45.332 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:45.332 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:45.332 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:45.332 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:45.332 CC lib/iscsi/conn.o 00:04:45.332 CC lib/iscsi/init_grp.o 00:04:45.591 CC lib/vhost/vhost.o 00:04:45.591 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:45.591 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:45.591 CC lib/vhost/vhost_rpc.o 00:04:45.591 CC lib/vhost/vhost_scsi.o 00:04:45.591 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:45.591 CC lib/vhost/vhost_blk.o 00:04:45.591 CC lib/vhost/rte_vhost_user.o 00:04:45.591 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:45.591 CC lib/iscsi/iscsi.o 00:04:45.850 CC lib/iscsi/md5.o 00:04:45.850 CC lib/iscsi/param.o 00:04:46.108 CC lib/ftl/utils/ftl_conf.o 00:04:46.109 CC lib/iscsi/portal_grp.o 00:04:46.109 CC lib/iscsi/tgt_node.o 00:04:46.109 CC lib/iscsi/iscsi_subsystem.o 00:04:46.109 CC lib/ftl/utils/ftl_md.o 00:04:46.109 LIB libspdk_nvmf.a 00:04:46.368 CC lib/iscsi/iscsi_rpc.o 00:04:46.368 CC lib/iscsi/task.o 00:04:46.368 SO libspdk_nvmf.so.17.0 00:04:46.368 CC lib/ftl/utils/ftl_mempool.o 00:04:46.368 CC lib/ftl/utils/ftl_bitmap.o 00:04:46.368 SYMLINK libspdk_nvmf.so 00:04:46.368 
CC lib/ftl/utils/ftl_property.o 00:04:46.628 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:46.628 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:46.628 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:46.628 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:46.628 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:46.628 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:46.628 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:46.628 LIB libspdk_vhost.a 00:04:46.628 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:46.628 SO libspdk_vhost.so.7.1 00:04:46.888 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:46.888 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:46.888 CC lib/ftl/base/ftl_base_dev.o 00:04:46.888 CC lib/ftl/base/ftl_base_bdev.o 00:04:46.888 CC lib/ftl/ftl_trace.o 00:04:46.888 SYMLINK libspdk_vhost.so 00:04:46.888 LIB libspdk_ftl.a 00:04:47.147 LIB libspdk_iscsi.a 00:04:47.147 SO libspdk_iscsi.so.7.0 00:04:47.147 SO libspdk_ftl.so.8.0 00:04:47.147 SYMLINK libspdk_iscsi.so 00:04:47.406 SYMLINK libspdk_ftl.so 00:04:47.665 CC module/env_dpdk/env_dpdk_rpc.o 00:04:47.665 CC module/blob/bdev/blob_bdev.o 00:04:47.665 CC module/sock/posix/posix.o 00:04:47.665 CC module/accel/ioat/accel_ioat.o 00:04:47.665 CC module/accel/iaa/accel_iaa.o 00:04:47.665 CC module/accel/error/accel_error.o 00:04:47.665 CC module/accel/dsa/accel_dsa.o 00:04:47.665 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:47.665 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:47.665 CC module/scheduler/gscheduler/gscheduler.o 00:04:47.924 LIB libspdk_env_dpdk_rpc.a 00:04:47.924 SO libspdk_env_dpdk_rpc.so.5.0 00:04:47.924 LIB libspdk_scheduler_gscheduler.a 00:04:47.924 LIB libspdk_scheduler_dpdk_governor.a 00:04:47.924 SYMLINK libspdk_env_dpdk_rpc.so 00:04:47.924 CC module/accel/error/accel_error_rpc.o 00:04:47.924 SO libspdk_scheduler_gscheduler.so.3.0 00:04:47.924 CC module/accel/ioat/accel_ioat_rpc.o 00:04:47.924 CC module/accel/dsa/accel_dsa_rpc.o 00:04:47.924 SO libspdk_scheduler_dpdk_governor.so.3.0 00:04:47.924 LIB libspdk_scheduler_dynamic.a 00:04:47.924 CC module/accel/iaa/accel_iaa_rpc.o 00:04:47.924 SO libspdk_scheduler_dynamic.so.3.0 00:04:47.924 SYMLINK libspdk_scheduler_gscheduler.so 00:04:47.924 LIB libspdk_blob_bdev.a 00:04:47.924 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:47.924 SO libspdk_blob_bdev.so.10.1 00:04:47.924 SYMLINK libspdk_scheduler_dynamic.so 00:04:47.924 LIB libspdk_accel_ioat.a 00:04:47.924 SYMLINK libspdk_blob_bdev.so 00:04:47.924 LIB libspdk_accel_error.a 00:04:47.924 LIB libspdk_accel_dsa.a 00:04:47.924 LIB libspdk_accel_iaa.a 00:04:47.924 SO libspdk_accel_ioat.so.5.0 00:04:48.183 SO libspdk_accel_error.so.1.0 00:04:48.183 SO libspdk_accel_dsa.so.4.0 00:04:48.183 SO libspdk_accel_iaa.so.2.0 00:04:48.183 SYMLINK libspdk_accel_ioat.so 00:04:48.183 SYMLINK libspdk_accel_error.so 00:04:48.183 SYMLINK libspdk_accel_dsa.so 00:04:48.183 SYMLINK libspdk_accel_iaa.so 00:04:48.183 CC module/bdev/delay/vbdev_delay.o 00:04:48.183 CC module/bdev/error/vbdev_error.o 00:04:48.183 CC module/blobfs/bdev/blobfs_bdev.o 00:04:48.183 CC module/bdev/gpt/gpt.o 00:04:48.183 CC module/bdev/lvol/vbdev_lvol.o 00:04:48.183 CC module/bdev/malloc/bdev_malloc.o 00:04:48.184 CC module/bdev/null/bdev_null.o 00:04:48.184 CC module/bdev/nvme/bdev_nvme.o 00:04:48.184 CC module/bdev/passthru/vbdev_passthru.o 00:04:48.443 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:48.443 CC module/bdev/gpt/vbdev_gpt.o 00:04:48.443 LIB libspdk_sock_posix.a 00:04:48.443 SO libspdk_sock_posix.so.5.0 00:04:48.443 CC module/bdev/error/vbdev_error_rpc.o 00:04:48.443 CC 
module/bdev/null/bdev_null_rpc.o 00:04:48.443 SYMLINK libspdk_sock_posix.so 00:04:48.443 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:48.443 LIB libspdk_blobfs_bdev.a 00:04:48.443 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:48.443 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:48.443 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:48.702 SO libspdk_blobfs_bdev.so.5.0 00:04:48.702 LIB libspdk_bdev_error.a 00:04:48.702 LIB libspdk_bdev_gpt.a 00:04:48.702 SO libspdk_bdev_error.so.5.0 00:04:48.702 LIB libspdk_bdev_null.a 00:04:48.702 SYMLINK libspdk_blobfs_bdev.so 00:04:48.702 SO libspdk_bdev_gpt.so.5.0 00:04:48.702 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:48.702 SO libspdk_bdev_null.so.5.0 00:04:48.702 SYMLINK libspdk_bdev_error.so 00:04:48.702 CC module/bdev/nvme/nvme_rpc.o 00:04:48.702 LIB libspdk_bdev_malloc.a 00:04:48.702 LIB libspdk_bdev_passthru.a 00:04:48.702 LIB libspdk_bdev_delay.a 00:04:48.702 SYMLINK libspdk_bdev_gpt.so 00:04:48.702 SO libspdk_bdev_passthru.so.5.0 00:04:48.702 SO libspdk_bdev_malloc.so.5.0 00:04:48.702 SYMLINK libspdk_bdev_null.so 00:04:48.702 SO libspdk_bdev_delay.so.5.0 00:04:48.703 CC module/bdev/raid/bdev_raid.o 00:04:48.703 SYMLINK libspdk_bdev_malloc.so 00:04:48.703 SYMLINK libspdk_bdev_delay.so 00:04:48.703 SYMLINK libspdk_bdev_passthru.so 00:04:48.962 CC module/bdev/nvme/bdev_mdns_client.o 00:04:48.962 CC module/bdev/split/vbdev_split.o 00:04:48.962 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:48.962 CC module/bdev/aio/bdev_aio.o 00:04:48.962 CC module/bdev/ftl/bdev_ftl.o 00:04:48.962 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:48.962 LIB libspdk_bdev_lvol.a 00:04:48.962 SO libspdk_bdev_lvol.so.5.0 00:04:48.962 CC module/bdev/split/vbdev_split_rpc.o 00:04:48.962 CC module/bdev/nvme/vbdev_opal.o 00:04:48.962 CC module/bdev/raid/bdev_raid_rpc.o 00:04:49.221 SYMLINK libspdk_bdev_lvol.so 00:04:49.221 CC module/bdev/raid/bdev_raid_sb.o 00:04:49.221 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:49.221 LIB libspdk_bdev_ftl.a 00:04:49.221 LIB libspdk_bdev_split.a 00:04:49.221 SO libspdk_bdev_ftl.so.5.0 00:04:49.221 CC module/bdev/aio/bdev_aio_rpc.o 00:04:49.221 SO libspdk_bdev_split.so.5.0 00:04:49.221 LIB libspdk_bdev_zone_block.a 00:04:49.221 SYMLINK libspdk_bdev_ftl.so 00:04:49.221 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:49.221 SYMLINK libspdk_bdev_split.so 00:04:49.221 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:49.221 SO libspdk_bdev_zone_block.so.5.0 00:04:49.221 CC module/bdev/iscsi/bdev_iscsi.o 00:04:49.221 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:49.221 CC module/bdev/raid/raid0.o 00:04:49.481 SYMLINK libspdk_bdev_zone_block.so 00:04:49.481 CC module/bdev/raid/raid1.o 00:04:49.481 LIB libspdk_bdev_aio.a 00:04:49.481 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:49.481 SO libspdk_bdev_aio.so.5.0 00:04:49.481 CC module/bdev/raid/concat.o 00:04:49.481 SYMLINK libspdk_bdev_aio.so 00:04:49.481 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:49.481 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:49.738 LIB libspdk_bdev_iscsi.a 00:04:49.738 LIB libspdk_bdev_raid.a 00:04:49.738 SO libspdk_bdev_iscsi.so.5.0 00:04:49.739 SO libspdk_bdev_raid.so.5.0 00:04:49.739 SYMLINK libspdk_bdev_iscsi.so 00:04:49.739 SYMLINK libspdk_bdev_raid.so 00:04:49.739 LIB libspdk_bdev_virtio.a 00:04:49.997 SO libspdk_bdev_virtio.so.5.0 00:04:49.997 SYMLINK libspdk_bdev_virtio.so 00:04:50.255 LIB libspdk_bdev_nvme.a 00:04:50.255 SO libspdk_bdev_nvme.so.6.0 00:04:50.255 SYMLINK libspdk_bdev_nvme.so 00:04:50.822 CC module/event/subsystems/sock/sock.o 00:04:50.822 CC 
module/event/subsystems/vmd/vmd_rpc.o 00:04:50.822 CC module/event/subsystems/vmd/vmd.o 00:04:50.822 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:50.822 CC module/event/subsystems/scheduler/scheduler.o 00:04:50.822 CC module/event/subsystems/iobuf/iobuf.o 00:04:50.822 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:50.822 LIB libspdk_event_sock.a 00:04:50.822 LIB libspdk_event_vhost_blk.a 00:04:50.822 LIB libspdk_event_vmd.a 00:04:50.822 SO libspdk_event_sock.so.4.0 00:04:50.822 SO libspdk_event_vhost_blk.so.2.0 00:04:50.822 SO libspdk_event_vmd.so.5.0 00:04:50.822 LIB libspdk_event_scheduler.a 00:04:50.822 SO libspdk_event_scheduler.so.3.0 00:04:50.822 SYMLINK libspdk_event_sock.so 00:04:50.822 SYMLINK libspdk_event_vhost_blk.so 00:04:50.822 LIB libspdk_event_iobuf.a 00:04:50.822 SYMLINK libspdk_event_vmd.so 00:04:50.822 SO libspdk_event_iobuf.so.2.0 00:04:50.822 SYMLINK libspdk_event_scheduler.so 00:04:50.822 SYMLINK libspdk_event_iobuf.so 00:04:51.080 CC module/event/subsystems/accel/accel.o 00:04:51.338 LIB libspdk_event_accel.a 00:04:51.338 SO libspdk_event_accel.so.5.0 00:04:51.338 SYMLINK libspdk_event_accel.so 00:04:51.597 CC module/event/subsystems/bdev/bdev.o 00:04:51.868 LIB libspdk_event_bdev.a 00:04:51.868 SO libspdk_event_bdev.so.5.0 00:04:51.868 SYMLINK libspdk_event_bdev.so 00:04:51.868 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:51.868 CC module/event/subsystems/scsi/scsi.o 00:04:51.868 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:52.157 CC module/event/subsystems/ublk/ublk.o 00:04:52.157 CC module/event/subsystems/nbd/nbd.o 00:04:52.157 LIB libspdk_event_ublk.a 00:04:52.157 LIB libspdk_event_scsi.a 00:04:52.157 LIB libspdk_event_nbd.a 00:04:52.157 SO libspdk_event_ublk.so.2.0 00:04:52.157 SO libspdk_event_scsi.so.5.0 00:04:52.157 SO libspdk_event_nbd.so.5.0 00:04:52.157 SYMLINK libspdk_event_ublk.so 00:04:52.431 SYMLINK libspdk_event_scsi.so 00:04:52.431 SYMLINK libspdk_event_nbd.so 00:04:52.431 LIB libspdk_event_nvmf.a 00:04:52.431 SO libspdk_event_nvmf.so.5.0 00:04:52.431 SYMLINK libspdk_event_nvmf.so 00:04:52.431 CC module/event/subsystems/iscsi/iscsi.o 00:04:52.431 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:52.690 LIB libspdk_event_vhost_scsi.a 00:04:52.690 LIB libspdk_event_iscsi.a 00:04:52.690 SO libspdk_event_vhost_scsi.so.2.0 00:04:52.690 SO libspdk_event_iscsi.so.5.0 00:04:52.690 SYMLINK libspdk_event_vhost_scsi.so 00:04:52.690 SYMLINK libspdk_event_iscsi.so 00:04:52.948 SO libspdk.so.5.0 00:04:52.948 SYMLINK libspdk.so 00:04:52.948 CXX app/trace/trace.o 00:04:53.207 CC examples/sock/hello_world/hello_sock.o 00:04:53.207 CC examples/vmd/lsvmd/lsvmd.o 00:04:53.207 CC examples/nvme/hello_world/hello_world.o 00:04:53.207 CC examples/ioat/perf/perf.o 00:04:53.207 CC examples/accel/perf/accel_perf.o 00:04:53.207 CC examples/bdev/hello_world/hello_bdev.o 00:04:53.207 CC examples/nvmf/nvmf/nvmf.o 00:04:53.207 CC test/accel/dif/dif.o 00:04:53.207 CC examples/blob/hello_world/hello_blob.o 00:04:53.207 LINK lsvmd 00:04:53.466 LINK ioat_perf 00:04:53.466 LINK hello_sock 00:04:53.466 LINK hello_world 00:04:53.466 LINK hello_bdev 00:04:53.466 LINK hello_blob 00:04:53.466 CC examples/vmd/led/led.o 00:04:53.466 LINK nvmf 00:04:53.466 LINK spdk_trace 00:04:53.466 CC examples/ioat/verify/verify.o 00:04:53.466 CC examples/nvme/reconnect/reconnect.o 00:04:53.466 LINK accel_perf 00:04:53.466 LINK dif 00:04:53.725 LINK led 00:04:53.725 CC test/app/bdev_svc/bdev_svc.o 00:04:53.725 CC examples/blob/cli/blobcli.o 00:04:53.725 CC 
examples/bdev/bdevperf/bdevperf.o 00:04:53.725 CC app/trace_record/trace_record.o 00:04:53.725 LINK verify 00:04:53.725 CC test/bdev/bdevio/bdevio.o 00:04:53.725 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:53.725 LINK bdev_svc 00:04:53.983 CC test/blobfs/mkfs/mkfs.o 00:04:53.983 LINK reconnect 00:04:53.983 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:53.983 CC test/app/histogram_perf/histogram_perf.o 00:04:53.983 LINK spdk_trace_record 00:04:53.983 LINK mkfs 00:04:53.983 LINK histogram_perf 00:04:54.243 CC examples/util/zipf/zipf.o 00:04:54.243 LINK blobcli 00:04:54.243 CC examples/thread/thread/thread_ex.o 00:04:54.243 CC app/nvmf_tgt/nvmf_main.o 00:04:54.243 LINK bdevio 00:04:54.243 CC test/app/jsoncat/jsoncat.o 00:04:54.243 LINK nvme_manage 00:04:54.243 LINK zipf 00:04:54.243 LINK nvme_fuzz 00:04:54.243 TEST_HEADER include/spdk/accel.h 00:04:54.243 TEST_HEADER include/spdk/accel_module.h 00:04:54.243 TEST_HEADER include/spdk/assert.h 00:04:54.243 TEST_HEADER include/spdk/barrier.h 00:04:54.243 TEST_HEADER include/spdk/base64.h 00:04:54.243 TEST_HEADER include/spdk/bdev.h 00:04:54.243 TEST_HEADER include/spdk/bdev_module.h 00:04:54.243 TEST_HEADER include/spdk/bdev_zone.h 00:04:54.243 TEST_HEADER include/spdk/bit_array.h 00:04:54.243 TEST_HEADER include/spdk/bit_pool.h 00:04:54.243 TEST_HEADER include/spdk/blob_bdev.h 00:04:54.243 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:54.243 TEST_HEADER include/spdk/blobfs.h 00:04:54.243 TEST_HEADER include/spdk/blob.h 00:04:54.243 TEST_HEADER include/spdk/conf.h 00:04:54.243 TEST_HEADER include/spdk/config.h 00:04:54.243 TEST_HEADER include/spdk/cpuset.h 00:04:54.243 TEST_HEADER include/spdk/crc16.h 00:04:54.243 TEST_HEADER include/spdk/crc32.h 00:04:54.243 TEST_HEADER include/spdk/crc64.h 00:04:54.243 TEST_HEADER include/spdk/dif.h 00:04:54.243 TEST_HEADER include/spdk/dma.h 00:04:54.243 TEST_HEADER include/spdk/endian.h 00:04:54.243 TEST_HEADER include/spdk/env_dpdk.h 00:04:54.243 TEST_HEADER include/spdk/env.h 00:04:54.243 TEST_HEADER include/spdk/event.h 00:04:54.243 TEST_HEADER include/spdk/fd_group.h 00:04:54.243 TEST_HEADER include/spdk/fd.h 00:04:54.243 TEST_HEADER include/spdk/file.h 00:04:54.243 LINK nvmf_tgt 00:04:54.243 TEST_HEADER include/spdk/ftl.h 00:04:54.243 TEST_HEADER include/spdk/gpt_spec.h 00:04:54.243 TEST_HEADER include/spdk/hexlify.h 00:04:54.243 TEST_HEADER include/spdk/histogram_data.h 00:04:54.243 TEST_HEADER include/spdk/idxd.h 00:04:54.243 TEST_HEADER include/spdk/idxd_spec.h 00:04:54.243 TEST_HEADER include/spdk/init.h 00:04:54.243 TEST_HEADER include/spdk/ioat.h 00:04:54.502 TEST_HEADER include/spdk/ioat_spec.h 00:04:54.502 TEST_HEADER include/spdk/iscsi_spec.h 00:04:54.502 LINK bdevperf 00:04:54.502 TEST_HEADER include/spdk/json.h 00:04:54.502 TEST_HEADER include/spdk/jsonrpc.h 00:04:54.502 TEST_HEADER include/spdk/likely.h 00:04:54.502 TEST_HEADER include/spdk/log.h 00:04:54.502 LINK thread 00:04:54.502 TEST_HEADER include/spdk/lvol.h 00:04:54.502 TEST_HEADER include/spdk/memory.h 00:04:54.502 TEST_HEADER include/spdk/mmio.h 00:04:54.502 TEST_HEADER include/spdk/nbd.h 00:04:54.502 TEST_HEADER include/spdk/notify.h 00:04:54.502 TEST_HEADER include/spdk/nvme.h 00:04:54.502 TEST_HEADER include/spdk/nvme_intel.h 00:04:54.502 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:54.502 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:54.502 TEST_HEADER include/spdk/nvme_spec.h 00:04:54.502 TEST_HEADER include/spdk/nvme_zns.h 00:04:54.502 LINK jsoncat 00:04:54.502 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:54.502 TEST_HEADER 
include/spdk/nvmf_fc_spec.h 00:04:54.502 TEST_HEADER include/spdk/nvmf.h 00:04:54.502 TEST_HEADER include/spdk/nvmf_spec.h 00:04:54.502 CC examples/idxd/perf/perf.o 00:04:54.502 TEST_HEADER include/spdk/nvmf_transport.h 00:04:54.502 TEST_HEADER include/spdk/opal.h 00:04:54.502 TEST_HEADER include/spdk/opal_spec.h 00:04:54.502 TEST_HEADER include/spdk/pci_ids.h 00:04:54.502 TEST_HEADER include/spdk/pipe.h 00:04:54.502 TEST_HEADER include/spdk/queue.h 00:04:54.502 TEST_HEADER include/spdk/reduce.h 00:04:54.502 TEST_HEADER include/spdk/rpc.h 00:04:54.502 TEST_HEADER include/spdk/scheduler.h 00:04:54.502 TEST_HEADER include/spdk/scsi.h 00:04:54.502 TEST_HEADER include/spdk/scsi_spec.h 00:04:54.502 TEST_HEADER include/spdk/sock.h 00:04:54.502 TEST_HEADER include/spdk/stdinc.h 00:04:54.502 TEST_HEADER include/spdk/string.h 00:04:54.502 TEST_HEADER include/spdk/thread.h 00:04:54.502 TEST_HEADER include/spdk/trace.h 00:04:54.502 TEST_HEADER include/spdk/trace_parser.h 00:04:54.502 TEST_HEADER include/spdk/tree.h 00:04:54.502 TEST_HEADER include/spdk/ublk.h 00:04:54.502 TEST_HEADER include/spdk/util.h 00:04:54.502 TEST_HEADER include/spdk/uuid.h 00:04:54.502 CC examples/nvme/arbitration/arbitration.o 00:04:54.502 TEST_HEADER include/spdk/version.h 00:04:54.502 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:54.502 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:54.502 TEST_HEADER include/spdk/vhost.h 00:04:54.502 TEST_HEADER include/spdk/vmd.h 00:04:54.502 TEST_HEADER include/spdk/xor.h 00:04:54.502 TEST_HEADER include/spdk/zipf.h 00:04:54.502 CXX test/cpp_headers/accel.o 00:04:54.502 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:54.502 CC test/dma/test_dma/test_dma.o 00:04:54.502 CXX test/cpp_headers/accel_module.o 00:04:54.502 CXX test/cpp_headers/assert.o 00:04:54.502 CC test/env/mem_callbacks/mem_callbacks.o 00:04:54.760 CC app/iscsi_tgt/iscsi_tgt.o 00:04:54.760 CC test/event/event_perf/event_perf.o 00:04:54.760 CC test/event/reactor/reactor.o 00:04:54.760 CXX test/cpp_headers/barrier.o 00:04:54.760 LINK idxd_perf 00:04:54.760 CC test/event/reactor_perf/reactor_perf.o 00:04:54.760 LINK arbitration 00:04:54.760 LINK reactor 00:04:54.761 LINK iscsi_tgt 00:04:54.761 LINK event_perf 00:04:54.761 LINK test_dma 00:04:54.761 CXX test/cpp_headers/base64.o 00:04:55.018 LINK reactor_perf 00:04:55.018 CC test/event/app_repeat/app_repeat.o 00:04:55.018 CXX test/cpp_headers/bdev.o 00:04:55.018 CC examples/nvme/hotplug/hotplug.o 00:04:55.018 CXX test/cpp_headers/bdev_module.o 00:04:55.018 CC test/env/vtophys/vtophys.o 00:04:55.018 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:55.018 LINK app_repeat 00:04:55.018 CC test/env/memory/memory_ut.o 00:04:55.018 CC app/spdk_tgt/spdk_tgt.o 00:04:55.018 LINK mem_callbacks 00:04:55.276 CXX test/cpp_headers/bdev_zone.o 00:04:55.276 LINK hotplug 00:04:55.276 CC test/event/scheduler/scheduler.o 00:04:55.276 CXX test/cpp_headers/bit_array.o 00:04:55.276 LINK vtophys 00:04:55.276 LINK env_dpdk_post_init 00:04:55.276 CXX test/cpp_headers/bit_pool.o 00:04:55.276 CXX test/cpp_headers/blob_bdev.o 00:04:55.276 LINK spdk_tgt 00:04:55.533 CC app/spdk_lspci/spdk_lspci.o 00:04:55.533 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:55.533 LINK scheduler 00:04:55.533 CC app/spdk_nvme_perf/perf.o 00:04:55.533 CC test/nvme/aer/aer.o 00:04:55.533 CXX test/cpp_headers/blobfs_bdev.o 00:04:55.533 CC test/lvol/esnap/esnap.o 00:04:55.534 LINK spdk_lspci 00:04:55.534 LINK cmb_copy 00:04:55.534 CC test/nvme/reset/reset.o 00:04:55.792 CC test/env/pci/pci_ut.o 00:04:55.792 CXX 
test/cpp_headers/blobfs.o 00:04:55.792 CC examples/nvme/abort/abort.o 00:04:55.792 LINK aer 00:04:55.792 CXX test/cpp_headers/blob.o 00:04:55.792 LINK reset 00:04:56.050 LINK memory_ut 00:04:56.050 LINK iscsi_fuzz 00:04:56.050 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:56.050 CXX test/cpp_headers/conf.o 00:04:56.050 LINK pci_ut 00:04:56.050 CC test/nvme/sgl/sgl.o 00:04:56.050 LINK abort 00:04:56.309 CC test/nvme/e2edp/nvme_dp.o 00:04:56.309 LINK interrupt_tgt 00:04:56.309 CXX test/cpp_headers/config.o 00:04:56.309 LINK spdk_nvme_perf 00:04:56.309 CXX test/cpp_headers/cpuset.o 00:04:56.309 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:56.309 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:56.309 CC test/rpc_client/rpc_client_test.o 00:04:56.309 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:56.309 LINK sgl 00:04:56.310 CC app/spdk_nvme_identify/identify.o 00:04:56.310 CXX test/cpp_headers/crc16.o 00:04:56.310 LINK nvme_dp 00:04:56.568 CC test/nvme/overhead/overhead.o 00:04:56.568 LINK pmr_persistence 00:04:56.568 LINK rpc_client_test 00:04:56.568 CXX test/cpp_headers/crc32.o 00:04:56.568 CC test/app/stub/stub.o 00:04:56.568 CXX test/cpp_headers/crc64.o 00:04:56.568 CXX test/cpp_headers/dif.o 00:04:56.568 CXX test/cpp_headers/dma.o 00:04:56.827 LINK stub 00:04:56.827 LINK vhost_fuzz 00:04:56.827 CC app/spdk_nvme_discover/discovery_aer.o 00:04:56.827 LINK overhead 00:04:56.827 CXX test/cpp_headers/endian.o 00:04:56.827 CC app/spdk_top/spdk_top.o 00:04:56.827 CC app/vhost/vhost.o 00:04:57.085 CC app/spdk_dd/spdk_dd.o 00:04:57.085 LINK spdk_nvme_discover 00:04:57.085 CXX test/cpp_headers/env_dpdk.o 00:04:57.085 CC test/nvme/err_injection/err_injection.o 00:04:57.085 CC app/fio/nvme/fio_plugin.o 00:04:57.085 LINK vhost 00:04:57.085 LINK spdk_nvme_identify 00:04:57.085 CXX test/cpp_headers/env.o 00:04:57.344 CC test/nvme/startup/startup.o 00:04:57.344 LINK err_injection 00:04:57.344 CXX test/cpp_headers/event.o 00:04:57.344 CXX test/cpp_headers/fd_group.o 00:04:57.344 LINK spdk_dd 00:04:57.344 CXX test/cpp_headers/fd.o 00:04:57.344 CC app/fio/bdev/fio_plugin.o 00:04:57.344 LINK startup 00:04:57.344 CXX test/cpp_headers/file.o 00:04:57.602 LINK spdk_nvme 00:04:57.602 CC test/nvme/reserve/reserve.o 00:04:57.602 CC test/nvme/simple_copy/simple_copy.o 00:04:57.602 CC test/thread/poller_perf/poller_perf.o 00:04:57.602 CXX test/cpp_headers/ftl.o 00:04:57.602 CC test/nvme/connect_stress/connect_stress.o 00:04:57.602 LINK spdk_top 00:04:57.602 CXX test/cpp_headers/gpt_spec.o 00:04:57.861 LINK poller_perf 00:04:57.861 CXX test/cpp_headers/hexlify.o 00:04:57.861 LINK connect_stress 00:04:57.861 LINK reserve 00:04:57.861 LINK simple_copy 00:04:57.861 CC test/nvme/boot_partition/boot_partition.o 00:04:57.861 CXX test/cpp_headers/histogram_data.o 00:04:57.861 LINK spdk_bdev 00:04:57.861 CC test/nvme/compliance/nvme_compliance.o 00:04:57.861 CXX test/cpp_headers/idxd.o 00:04:57.861 CXX test/cpp_headers/idxd_spec.o 00:04:57.861 CXX test/cpp_headers/init.o 00:04:57.861 CXX test/cpp_headers/ioat.o 00:04:58.119 LINK boot_partition 00:04:58.119 CC test/nvme/fused_ordering/fused_ordering.o 00:04:58.119 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:58.119 CXX test/cpp_headers/ioat_spec.o 00:04:58.119 CXX test/cpp_headers/iscsi_spec.o 00:04:58.119 CXX test/cpp_headers/json.o 00:04:58.119 CXX test/cpp_headers/jsonrpc.o 00:04:58.119 LINK nvme_compliance 00:04:58.378 CC test/nvme/fdp/fdp.o 00:04:58.378 LINK fused_ordering 00:04:58.378 CXX test/cpp_headers/likely.o 00:04:58.378 LINK doorbell_aers 
00:04:58.378 CXX test/cpp_headers/log.o 00:04:58.378 CXX test/cpp_headers/lvol.o 00:04:58.378 CC test/nvme/cuse/cuse.o 00:04:58.378 CXX test/cpp_headers/memory.o 00:04:58.378 CXX test/cpp_headers/mmio.o 00:04:58.637 CXX test/cpp_headers/nbd.o 00:04:58.637 CXX test/cpp_headers/notify.o 00:04:58.637 CXX test/cpp_headers/nvme.o 00:04:58.637 CXX test/cpp_headers/nvme_intel.o 00:04:58.637 LINK fdp 00:04:58.637 CXX test/cpp_headers/nvme_ocssd.o 00:04:58.637 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:58.637 CXX test/cpp_headers/nvme_spec.o 00:04:58.637 CXX test/cpp_headers/nvme_zns.o 00:04:58.637 CXX test/cpp_headers/nvmf_cmd.o 00:04:58.897 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:58.897 CXX test/cpp_headers/nvmf.o 00:04:58.897 CXX test/cpp_headers/nvmf_spec.o 00:04:58.897 CXX test/cpp_headers/nvmf_transport.o 00:04:58.897 CXX test/cpp_headers/opal.o 00:04:58.897 CXX test/cpp_headers/opal_spec.o 00:04:58.897 CXX test/cpp_headers/pci_ids.o 00:04:58.897 CXX test/cpp_headers/pipe.o 00:04:59.156 CXX test/cpp_headers/queue.o 00:04:59.156 CXX test/cpp_headers/reduce.o 00:04:59.156 CXX test/cpp_headers/rpc.o 00:04:59.156 CXX test/cpp_headers/scheduler.o 00:04:59.156 CXX test/cpp_headers/scsi.o 00:04:59.156 CXX test/cpp_headers/scsi_spec.o 00:04:59.156 CXX test/cpp_headers/sock.o 00:04:59.156 CXX test/cpp_headers/stdinc.o 00:04:59.156 CXX test/cpp_headers/string.o 00:04:59.156 CXX test/cpp_headers/thread.o 00:04:59.156 CXX test/cpp_headers/trace.o 00:04:59.415 CXX test/cpp_headers/trace_parser.o 00:04:59.415 CXX test/cpp_headers/tree.o 00:04:59.415 CXX test/cpp_headers/ublk.o 00:04:59.415 CXX test/cpp_headers/util.o 00:04:59.415 CXX test/cpp_headers/uuid.o 00:04:59.415 CXX test/cpp_headers/version.o 00:04:59.415 CXX test/cpp_headers/vfio_user_pci.o 00:04:59.415 CXX test/cpp_headers/vfio_user_spec.o 00:04:59.415 CXX test/cpp_headers/vhost.o 00:04:59.415 CXX test/cpp_headers/vmd.o 00:04:59.672 CXX test/cpp_headers/xor.o 00:04:59.672 LINK cuse 00:04:59.672 CXX test/cpp_headers/zipf.o 00:04:59.930 LINK esnap 00:05:02.462 00:05:02.462 real 0m49.146s 00:05:02.462 user 4m36.429s 00:05:02.462 sys 1m4.020s 00:05:02.462 14:11:07 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:05:02.462 14:11:07 -- common/autotest_common.sh@10 -- $ set +x 00:05:02.462 ************************************ 00:05:02.462 END TEST make 00:05:02.462 ************************************ 00:05:02.462 14:11:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:02.462 14:11:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:02.462 14:11:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:02.462 14:11:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:02.462 14:11:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:02.462 14:11:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:02.462 14:11:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:02.462 14:11:07 -- scripts/common.sh@335 -- # IFS=.-: 00:05:02.462 14:11:07 -- scripts/common.sh@335 -- # read -ra ver1 00:05:02.462 14:11:07 -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.462 14:11:07 -- scripts/common.sh@336 -- # read -ra ver2 00:05:02.462 14:11:07 -- scripts/common.sh@337 -- # local 'op=<' 00:05:02.462 14:11:07 -- scripts/common.sh@339 -- # ver1_l=2 00:05:02.462 14:11:07 -- scripts/common.sh@340 -- # ver2_l=1 00:05:02.462 14:11:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:02.462 14:11:07 -- scripts/common.sh@343 -- # case "$op" in 00:05:02.462 14:11:07 -- scripts/common.sh@344 -- # : 1 
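The make stage above is driven through the harness's run_test helper, which is responsible for the asterisk banners, the "START TEST make" / "END TEST make" markers, and the real/user/sys timing printed when the build finishes. The snippet below is only a simplified stand-in that reproduces the banner-and-timing shape; the real helper appears to live in test/common/autotest_common.sh and additionally handles xtrace control and failure accounting.

```bash
#!/usr/bin/env bash
# Simplified stand-in for the run_test wrapper whose output appears above.
run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?   # status of the wrapped command, captured before anything else runs
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

run_test_sketch make make -j10
```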
00:05:02.462 14:11:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:02.462 14:11:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:02.462 14:11:07 -- scripts/common.sh@364 -- # decimal 1 00:05:02.462 14:11:07 -- scripts/common.sh@352 -- # local d=1 00:05:02.462 14:11:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.462 14:11:07 -- scripts/common.sh@354 -- # echo 1 00:05:02.462 14:11:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:02.462 14:11:07 -- scripts/common.sh@365 -- # decimal 2 00:05:02.462 14:11:07 -- scripts/common.sh@352 -- # local d=2 00:05:02.462 14:11:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.462 14:11:07 -- scripts/common.sh@354 -- # echo 2 00:05:02.462 14:11:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:02.462 14:11:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:02.462 14:11:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:02.462 14:11:07 -- scripts/common.sh@367 -- # return 0 00:05:02.462 14:11:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.462 14:11:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:02.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.462 --rc genhtml_branch_coverage=1 00:05:02.462 --rc genhtml_function_coverage=1 00:05:02.462 --rc genhtml_legend=1 00:05:02.462 --rc geninfo_all_blocks=1 00:05:02.462 --rc geninfo_unexecuted_blocks=1 00:05:02.462 00:05:02.462 ' 00:05:02.462 14:11:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:02.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.462 --rc genhtml_branch_coverage=1 00:05:02.462 --rc genhtml_function_coverage=1 00:05:02.462 --rc genhtml_legend=1 00:05:02.462 --rc geninfo_all_blocks=1 00:05:02.462 --rc geninfo_unexecuted_blocks=1 00:05:02.462 00:05:02.462 ' 00:05:02.462 14:11:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:02.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.462 --rc genhtml_branch_coverage=1 00:05:02.462 --rc genhtml_function_coverage=1 00:05:02.462 --rc genhtml_legend=1 00:05:02.462 --rc geninfo_all_blocks=1 00:05:02.462 --rc geninfo_unexecuted_blocks=1 00:05:02.462 00:05:02.462 ' 00:05:02.462 14:11:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:02.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.462 --rc genhtml_branch_coverage=1 00:05:02.462 --rc genhtml_function_coverage=1 00:05:02.462 --rc genhtml_legend=1 00:05:02.462 --rc geninfo_all_blocks=1 00:05:02.462 --rc geninfo_unexecuted_blocks=1 00:05:02.462 00:05:02.462 ' 00:05:02.462 14:11:07 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:02.462 14:11:07 -- nvmf/common.sh@7 -- # uname -s 00:05:02.462 14:11:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:02.462 14:11:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:02.462 14:11:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:02.462 14:11:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:02.462 14:11:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:02.462 14:11:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:02.462 14:11:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:02.462 14:11:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:02.462 14:11:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:02.462 14:11:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
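The xtrace block above is autotest_common.sh checking the installed lcov version: `lt 1.15 2` calls cmp_versions with op '<', which splits both version strings on '.', '-' and ':' and compares the fields numerically from left to right; because 1.15 is less than 2, the pre-2.0 set of lcov --rc options is selected a few lines later. The function below is a compact re-implementation reconstructed from the trace, not a copy of scripts/common.sh, so details such as the non-numeric-field handling are simplified.

```bash
#!/usr/bin/env bash
# Approximation of the cmp_versions logic traced above (scripts/common.sh):
# split on '.', '-' and ':' and compare field by field as decimal numbers.
version_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v a b len
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0}          # missing fields compare as 0
        b=${ver2[v]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1                     # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 < 2: pre-2.0 lcov option set selected"
```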
00:05:02.462 14:11:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:05:02.462 14:11:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:05:02.462 14:11:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:02.462 14:11:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:02.462 14:11:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:05:02.463 14:11:07 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:02.463 14:11:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:02.463 14:11:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:02.463 14:11:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:02.463 14:11:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.463 14:11:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.463 14:11:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.463 14:11:07 -- paths/export.sh@5 -- # export PATH 00:05:02.463 14:11:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.463 14:11:07 -- nvmf/common.sh@46 -- # : 0 00:05:02.463 14:11:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:02.463 14:11:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:02.463 14:11:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:02.463 14:11:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:02.463 14:11:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:02.463 14:11:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:02.463 14:11:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:02.463 14:11:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:02.463 14:11:07 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:02.463 14:11:07 -- spdk/autotest.sh@32 -- # uname -s 00:05:02.463 14:11:07 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:02.463 14:11:07 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:02.463 14:11:07 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:02.463 14:11:07 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:02.463 14:11:07 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:02.463 14:11:07 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:02.463 14:11:07 -- spdk/autotest.sh@46 -- # 
type -P udevadm 00:05:02.463 14:11:07 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:02.463 14:11:07 -- spdk/autotest.sh@48 -- # udevadm_pid=61817 00:05:02.463 14:11:07 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:05:02.463 14:11:07 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:02.463 14:11:07 -- spdk/autotest.sh@54 -- # echo 61819 00:05:02.463 14:11:07 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:05:02.463 14:11:07 -- spdk/autotest.sh@56 -- # echo 61823 00:05:02.463 14:11:07 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:05:02.463 14:11:07 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:05:02.463 14:11:07 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:02.463 14:11:07 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:05:02.463 14:11:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:02.463 14:11:07 -- common/autotest_common.sh@10 -- # set +x 00:05:02.463 14:11:07 -- spdk/autotest.sh@70 -- # create_test_list 00:05:02.463 14:11:07 -- common/autotest_common.sh@746 -- # xtrace_disable 00:05:02.463 14:11:07 -- common/autotest_common.sh@10 -- # set +x 00:05:02.463 14:11:08 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:02.463 14:11:08 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:02.463 14:11:08 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:05:02.463 14:11:08 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:02.463 14:11:08 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:05:02.463 14:11:08 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:05:02.463 14:11:08 -- common/autotest_common.sh@1450 -- # uname 00:05:02.463 14:11:08 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:05:02.463 14:11:08 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:05:02.463 14:11:08 -- common/autotest_common.sh@1470 -- # uname 00:05:02.463 14:11:08 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:05:02.463 14:11:08 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:05:02.463 14:11:08 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:02.722 lcov: LCOV version 1.15 00:05:02.722 14:11:08 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:10.832 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:05:10.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:05:10.832 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:05:10.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:05:10.832 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:05:10.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:05:28.913 14:11:32 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:05:28.913 14:11:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:28.913 14:11:32 -- common/autotest_common.sh@10 -- # set +x 00:05:28.913 14:11:32 -- spdk/autotest.sh@89 -- # rm -f 00:05:28.913 14:11:32 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:28.913 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:28.913 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:05:28.913 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:05:28.913 14:11:33 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:05:28.913 14:11:33 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:28.913 14:11:33 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:28.913 14:11:33 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:28.913 14:11:33 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:28.913 14:11:33 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:28.913 14:11:33 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:28.913 14:11:33 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:28.913 14:11:33 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:28.913 14:11:33 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:28.913 14:11:33 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:28.913 14:11:33 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:28.913 14:11:33 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:28.913 14:11:33 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:28.913 14:11:33 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:28.913 14:11:33 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:28.913 14:11:33 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:28.913 14:11:33 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:28.913 14:11:33 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:28.913 14:11:33 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:28.913 14:11:33 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:28.913 14:11:33 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:28.913 14:11:33 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:28.913 14:11:33 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:28.913 14:11:33 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:05:28.913 14:11:33 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:05:28.913 14:11:33 -- spdk/autotest.sh@108 -- # grep -v p 00:05:28.913 14:11:33 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:28.913 14:11:33 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:28.913 14:11:33 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:05:28.913 14:11:33 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:05:28.913 14:11:33 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py 
/dev/nvme0n1 00:05:28.913 No valid GPT data, bailing 00:05:28.913 14:11:33 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:28.913 14:11:33 -- scripts/common.sh@393 -- # pt= 00:05:28.913 14:11:33 -- scripts/common.sh@394 -- # return 1 00:05:28.913 14:11:33 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:28.913 1+0 records in 00:05:28.913 1+0 records out 00:05:28.913 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00534411 s, 196 MB/s 00:05:28.913 14:11:33 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:28.913 14:11:33 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:28.913 14:11:33 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:05:28.913 14:11:33 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:05:28.913 14:11:33 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:28.913 No valid GPT data, bailing 00:05:28.913 14:11:33 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:28.913 14:11:33 -- scripts/common.sh@393 -- # pt= 00:05:28.913 14:11:33 -- scripts/common.sh@394 -- # return 1 00:05:28.913 14:11:33 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:28.913 1+0 records in 00:05:28.913 1+0 records out 00:05:28.913 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00485705 s, 216 MB/s 00:05:28.913 14:11:33 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:28.913 14:11:33 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:28.913 14:11:33 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n2 00:05:28.913 14:11:33 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:05:28.913 14:11:33 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:28.913 No valid GPT data, bailing 00:05:28.913 14:11:33 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:28.913 14:11:33 -- scripts/common.sh@393 -- # pt= 00:05:28.913 14:11:33 -- scripts/common.sh@394 -- # return 1 00:05:28.913 14:11:33 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:28.913 1+0 records in 00:05:28.913 1+0 records out 00:05:28.913 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0037 s, 283 MB/s 00:05:28.913 14:11:33 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:28.913 14:11:33 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:28.913 14:11:33 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n3 00:05:28.913 14:11:33 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:05:28.913 14:11:33 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:28.913 No valid GPT data, bailing 00:05:28.913 14:11:33 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:28.913 14:11:33 -- scripts/common.sh@393 -- # pt= 00:05:28.913 14:11:33 -- scripts/common.sh@394 -- # return 1 00:05:28.913 14:11:33 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:28.913 1+0 records in 00:05:28.913 1+0 records out 00:05:28.913 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0042252 s, 248 MB/s 00:05:28.913 14:11:33 -- spdk/autotest.sh@116 -- # sync 00:05:28.913 14:11:33 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:28.913 14:11:33 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:28.913 14:11:33 -- common/autotest_common.sh@22 -- # reap_spdk_processes 
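Between timing_enter pre_cleanup and the sync above, autotest.sh sweeps every unmounted, non-zoned NVMe namespace: spdk-gpt.py and blkid -s PTTYPE confirm there is no partition table ("No valid GPT data, bailing"), and the first MiB of each namespace is then zeroed with dd so stale metadata cannot bleed into later tests. Roughly, per the trace (simplified; the real script also honors the zoned devices collected by get_zoned_devs):

# Simplified reconstruction of the wipe loop traced above; not the verbatim autotest.sh code.
for dev in $(ls /dev/nvme*n* | grep -v p || true); do
    # skip namespaces that still carry a partition-table signature
    if [[ -n $(blkid -s PTTYPE -o value "$dev") ]]; then
        continue
    fi
    # zero the first MiB of the namespace
    dd if=/dev/zero of="$dev" bs=1M count=1
done
sync
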
00:05:30.289 14:11:35 -- spdk/autotest.sh@122 -- # uname -s 00:05:30.289 14:11:35 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:05:30.289 14:11:35 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:30.289 14:11:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:30.289 14:11:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:30.289 14:11:35 -- common/autotest_common.sh@10 -- # set +x 00:05:30.289 ************************************ 00:05:30.289 START TEST setup.sh 00:05:30.289 ************************************ 00:05:30.289 14:11:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:30.289 * Looking for test storage... 00:05:30.289 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:30.289 14:11:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:30.289 14:11:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:30.289 14:11:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:30.548 14:11:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:30.548 14:11:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:30.548 14:11:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:30.548 14:11:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:30.548 14:11:36 -- scripts/common.sh@335 -- # IFS=.-: 00:05:30.548 14:11:36 -- scripts/common.sh@335 -- # read -ra ver1 00:05:30.548 14:11:36 -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.548 14:11:36 -- scripts/common.sh@336 -- # read -ra ver2 00:05:30.548 14:11:36 -- scripts/common.sh@337 -- # local 'op=<' 00:05:30.548 14:11:36 -- scripts/common.sh@339 -- # ver1_l=2 00:05:30.548 14:11:36 -- scripts/common.sh@340 -- # ver2_l=1 00:05:30.548 14:11:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:30.548 14:11:36 -- scripts/common.sh@343 -- # case "$op" in 00:05:30.548 14:11:36 -- scripts/common.sh@344 -- # : 1 00:05:30.548 14:11:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:30.548 14:11:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:30.548 14:11:36 -- scripts/common.sh@364 -- # decimal 1 00:05:30.548 14:11:36 -- scripts/common.sh@352 -- # local d=1 00:05:30.548 14:11:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.548 14:11:36 -- scripts/common.sh@354 -- # echo 1 00:05:30.548 14:11:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:30.548 14:11:36 -- scripts/common.sh@365 -- # decimal 2 00:05:30.548 14:11:36 -- scripts/common.sh@352 -- # local d=2 00:05:30.548 14:11:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.548 14:11:36 -- scripts/common.sh@354 -- # echo 2 00:05:30.548 14:11:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:30.548 14:11:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:30.548 14:11:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:30.548 14:11:36 -- scripts/common.sh@367 -- # return 0 00:05:30.548 14:11:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.548 14:11:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:30.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.548 --rc genhtml_branch_coverage=1 00:05:30.548 --rc genhtml_function_coverage=1 00:05:30.548 --rc genhtml_legend=1 00:05:30.548 --rc geninfo_all_blocks=1 00:05:30.548 --rc geninfo_unexecuted_blocks=1 00:05:30.548 00:05:30.548 ' 00:05:30.548 14:11:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:30.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.548 --rc genhtml_branch_coverage=1 00:05:30.548 --rc genhtml_function_coverage=1 00:05:30.548 --rc genhtml_legend=1 00:05:30.548 --rc geninfo_all_blocks=1 00:05:30.548 --rc geninfo_unexecuted_blocks=1 00:05:30.548 00:05:30.548 ' 00:05:30.548 14:11:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:30.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.548 --rc genhtml_branch_coverage=1 00:05:30.548 --rc genhtml_function_coverage=1 00:05:30.548 --rc genhtml_legend=1 00:05:30.548 --rc geninfo_all_blocks=1 00:05:30.548 --rc geninfo_unexecuted_blocks=1 00:05:30.548 00:05:30.548 ' 00:05:30.548 14:11:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:30.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.548 --rc genhtml_branch_coverage=1 00:05:30.548 --rc genhtml_function_coverage=1 00:05:30.548 --rc genhtml_legend=1 00:05:30.548 --rc geninfo_all_blocks=1 00:05:30.548 --rc geninfo_unexecuted_blocks=1 00:05:30.548 00:05:30.548 ' 00:05:30.548 14:11:36 -- setup/test-setup.sh@10 -- # uname -s 00:05:30.548 14:11:36 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:30.548 14:11:36 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:30.548 14:11:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:30.548 14:11:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:30.548 14:11:36 -- common/autotest_common.sh@10 -- # set +x 00:05:30.548 ************************************ 00:05:30.548 START TEST acl 00:05:30.548 ************************************ 00:05:30.548 14:11:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:30.548 * Looking for test storage... 
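The acl suite that starts here repeats the coverage boilerplate and then calls get_zoned_devs (acl.sh@10). As the is_block_zoned traces further down show, a namespace counts as zoned when /sys/block/<dev>/queue/zoned exists and reads something other than "none"; on this VM every check returns "none", so no device is excluded. A bare-bones sketch of that test, reconstructed from the trace rather than copied from autotest_common.sh:

# Sketch of the zoned-device check exercised by get_zoned_devs in this run.
is_block_zoned() {
    local device=$1
    [[ -e /sys/block/$device/queue/zoned ]] || return 1
    [[ $(< "/sys/block/$device/queue/zoned") != none ]]
}

for nvme in /sys/block/nvme*; do
    is_block_zoned "${nvme##*/}" && echo "zoned device: ${nvme##*/}"
done
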
00:05:30.548 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:30.548 14:11:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:30.548 14:11:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:30.548 14:11:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:30.807 14:11:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:30.807 14:11:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:30.807 14:11:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:30.807 14:11:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:30.807 14:11:36 -- scripts/common.sh@335 -- # IFS=.-: 00:05:30.807 14:11:36 -- scripts/common.sh@335 -- # read -ra ver1 00:05:30.807 14:11:36 -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.807 14:11:36 -- scripts/common.sh@336 -- # read -ra ver2 00:05:30.807 14:11:36 -- scripts/common.sh@337 -- # local 'op=<' 00:05:30.807 14:11:36 -- scripts/common.sh@339 -- # ver1_l=2 00:05:30.807 14:11:36 -- scripts/common.sh@340 -- # ver2_l=1 00:05:30.807 14:11:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:30.807 14:11:36 -- scripts/common.sh@343 -- # case "$op" in 00:05:30.807 14:11:36 -- scripts/common.sh@344 -- # : 1 00:05:30.807 14:11:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:30.807 14:11:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:30.807 14:11:36 -- scripts/common.sh@364 -- # decimal 1 00:05:30.807 14:11:36 -- scripts/common.sh@352 -- # local d=1 00:05:30.807 14:11:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.807 14:11:36 -- scripts/common.sh@354 -- # echo 1 00:05:30.807 14:11:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:30.807 14:11:36 -- scripts/common.sh@365 -- # decimal 2 00:05:30.807 14:11:36 -- scripts/common.sh@352 -- # local d=2 00:05:30.807 14:11:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.807 14:11:36 -- scripts/common.sh@354 -- # echo 2 00:05:30.807 14:11:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:30.807 14:11:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:30.807 14:11:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:30.807 14:11:36 -- scripts/common.sh@367 -- # return 0 00:05:30.807 14:11:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.807 14:11:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:30.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.807 --rc genhtml_branch_coverage=1 00:05:30.807 --rc genhtml_function_coverage=1 00:05:30.807 --rc genhtml_legend=1 00:05:30.807 --rc geninfo_all_blocks=1 00:05:30.807 --rc geninfo_unexecuted_blocks=1 00:05:30.807 00:05:30.807 ' 00:05:30.807 14:11:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:30.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.807 --rc genhtml_branch_coverage=1 00:05:30.807 --rc genhtml_function_coverage=1 00:05:30.807 --rc genhtml_legend=1 00:05:30.807 --rc geninfo_all_blocks=1 00:05:30.807 --rc geninfo_unexecuted_blocks=1 00:05:30.807 00:05:30.807 ' 00:05:30.807 14:11:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:30.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.807 --rc genhtml_branch_coverage=1 00:05:30.807 --rc genhtml_function_coverage=1 00:05:30.807 --rc genhtml_legend=1 00:05:30.807 --rc geninfo_all_blocks=1 00:05:30.807 --rc geninfo_unexecuted_blocks=1 00:05:30.807 00:05:30.807 ' 00:05:30.807 14:11:36 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:30.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.807 --rc genhtml_branch_coverage=1 00:05:30.807 --rc genhtml_function_coverage=1 00:05:30.807 --rc genhtml_legend=1 00:05:30.807 --rc geninfo_all_blocks=1 00:05:30.807 --rc geninfo_unexecuted_blocks=1 00:05:30.807 00:05:30.807 ' 00:05:30.807 14:11:36 -- setup/acl.sh@10 -- # get_zoned_devs 00:05:30.807 14:11:36 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:30.807 14:11:36 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:30.807 14:11:36 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:30.807 14:11:36 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:30.807 14:11:36 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:30.807 14:11:36 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:30.807 14:11:36 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:30.807 14:11:36 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:30.807 14:11:36 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:30.807 14:11:36 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:30.807 14:11:36 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:30.807 14:11:36 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:30.807 14:11:36 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:30.807 14:11:36 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:30.807 14:11:36 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:30.807 14:11:36 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:30.807 14:11:36 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:30.807 14:11:36 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:30.807 14:11:36 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:30.807 14:11:36 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:30.807 14:11:36 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:30.807 14:11:36 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:30.807 14:11:36 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:30.807 14:11:36 -- setup/acl.sh@12 -- # devs=() 00:05:30.807 14:11:36 -- setup/acl.sh@12 -- # declare -a devs 00:05:30.807 14:11:36 -- setup/acl.sh@13 -- # drivers=() 00:05:30.807 14:11:36 -- setup/acl.sh@13 -- # declare -A drivers 00:05:30.807 14:11:36 -- setup/acl.sh@51 -- # setup reset 00:05:30.807 14:11:36 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:30.807 14:11:36 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:31.374 14:11:37 -- setup/acl.sh@52 -- # collect_setup_devs 00:05:31.374 14:11:37 -- setup/acl.sh@16 -- # local dev driver 00:05:31.633 14:11:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:31.633 14:11:37 -- setup/acl.sh@15 -- # setup output status 00:05:31.633 14:11:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:31.633 14:11:37 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:31.633 Hugepages 00:05:31.633 node hugesize free / total 00:05:31.633 14:11:37 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:31.633 14:11:37 -- setup/acl.sh@19 -- # continue 00:05:31.633 14:11:37 -- setup/acl.sh@18 -- # read -r _ 
dev _ _ _ driver _ 00:05:31.633 00:05:31.633 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:31.633 14:11:37 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:31.633 14:11:37 -- setup/acl.sh@19 -- # continue 00:05:31.633 14:11:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:31.633 14:11:37 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:31.633 14:11:37 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:31.633 14:11:37 -- setup/acl.sh@20 -- # continue 00:05:31.633 14:11:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:31.892 14:11:37 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:05:31.892 14:11:37 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:31.892 14:11:37 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:31.892 14:11:37 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:31.892 14:11:37 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:31.892 14:11:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:31.892 14:11:37 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:05:31.892 14:11:37 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:31.892 14:11:37 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:31.892 14:11:37 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:31.892 14:11:37 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:31.892 14:11:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:31.892 14:11:37 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:05:31.892 14:11:37 -- setup/acl.sh@54 -- # run_test denied denied 00:05:31.892 14:11:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:31.892 14:11:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.892 14:11:37 -- common/autotest_common.sh@10 -- # set +x 00:05:31.892 ************************************ 00:05:31.892 START TEST denied 00:05:31.892 ************************************ 00:05:31.892 14:11:37 -- common/autotest_common.sh@1114 -- # denied 00:05:31.892 14:11:37 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:05:31.892 14:11:37 -- setup/acl.sh@38 -- # setup output config 00:05:31.892 14:11:37 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:05:31.892 14:11:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:31.892 14:11:37 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:32.829 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:05:32.829 14:11:38 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:05:32.829 14:11:38 -- setup/acl.sh@28 -- # local dev driver 00:05:32.829 14:11:38 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:32.829 14:11:38 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:05:32.829 14:11:38 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:05:32.829 14:11:38 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:32.829 14:11:38 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:32.829 14:11:38 -- setup/acl.sh@41 -- # setup reset 00:05:32.829 14:11:38 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:32.829 14:11:38 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:33.397 ************************************ 00:05:33.397 END TEST denied 00:05:33.397 ************************************ 00:05:33.397 00:05:33.397 real 0m1.571s 00:05:33.397 user 0m0.609s 00:05:33.397 sys 0m0.942s 00:05:33.397 14:11:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:33.397 14:11:39 -- 
common/autotest_common.sh@10 -- # set +x 00:05:33.656 14:11:39 -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:33.656 14:11:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:33.656 14:11:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.656 14:11:39 -- common/autotest_common.sh@10 -- # set +x 00:05:33.656 ************************************ 00:05:33.656 START TEST allowed 00:05:33.656 ************************************ 00:05:33.656 14:11:39 -- common/autotest_common.sh@1114 -- # allowed 00:05:33.656 14:11:39 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:05:33.656 14:11:39 -- setup/acl.sh@45 -- # setup output config 00:05:33.656 14:11:39 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:05:33.656 14:11:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:33.656 14:11:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:34.590 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:34.590 14:11:39 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:05:34.590 14:11:39 -- setup/acl.sh@28 -- # local dev driver 00:05:34.590 14:11:39 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:34.590 14:11:39 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:05:34.590 14:11:39 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:05:34.590 14:11:39 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:34.590 14:11:39 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:34.590 14:11:39 -- setup/acl.sh@48 -- # setup reset 00:05:34.590 14:11:39 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:34.590 14:11:39 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:35.158 ************************************ 00:05:35.158 END TEST allowed 00:05:35.158 ************************************ 00:05:35.158 00:05:35.158 real 0m1.664s 00:05:35.158 user 0m0.737s 00:05:35.158 sys 0m0.917s 00:05:35.158 14:11:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:35.158 14:11:40 -- common/autotest_common.sh@10 -- # set +x 00:05:35.158 ************************************ 00:05:35.158 END TEST acl 00:05:35.158 ************************************ 00:05:35.158 00:05:35.158 real 0m4.751s 00:05:35.158 user 0m2.053s 00:05:35.158 sys 0m2.692s 00:05:35.158 14:11:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:35.158 14:11:40 -- common/autotest_common.sh@10 -- # set +x 00:05:35.418 14:11:40 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:35.418 14:11:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:35.418 14:11:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.418 14:11:40 -- common/autotest_common.sh@10 -- # set +x 00:05:35.418 ************************************ 00:05:35.418 START TEST hugepages 00:05:35.418 ************************************ 00:05:35.418 14:11:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:35.418 * Looking for test storage... 
00:05:35.418 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:35.418 14:11:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:35.418 14:11:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:35.418 14:11:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:35.418 14:11:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:35.418 14:11:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:35.418 14:11:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:35.418 14:11:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:35.418 14:11:41 -- scripts/common.sh@335 -- # IFS=.-: 00:05:35.418 14:11:41 -- scripts/common.sh@335 -- # read -ra ver1 00:05:35.418 14:11:41 -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.418 14:11:41 -- scripts/common.sh@336 -- # read -ra ver2 00:05:35.418 14:11:41 -- scripts/common.sh@337 -- # local 'op=<' 00:05:35.418 14:11:41 -- scripts/common.sh@339 -- # ver1_l=2 00:05:35.418 14:11:41 -- scripts/common.sh@340 -- # ver2_l=1 00:05:35.418 14:11:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:35.418 14:11:41 -- scripts/common.sh@343 -- # case "$op" in 00:05:35.418 14:11:41 -- scripts/common.sh@344 -- # : 1 00:05:35.418 14:11:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:35.418 14:11:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:35.418 14:11:41 -- scripts/common.sh@364 -- # decimal 1 00:05:35.418 14:11:41 -- scripts/common.sh@352 -- # local d=1 00:05:35.418 14:11:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.418 14:11:41 -- scripts/common.sh@354 -- # echo 1 00:05:35.418 14:11:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:35.418 14:11:41 -- scripts/common.sh@365 -- # decimal 2 00:05:35.418 14:11:41 -- scripts/common.sh@352 -- # local d=2 00:05:35.418 14:11:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.418 14:11:41 -- scripts/common.sh@354 -- # echo 2 00:05:35.418 14:11:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:35.418 14:11:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:35.418 14:11:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:35.418 14:11:41 -- scripts/common.sh@367 -- # return 0 00:05:35.418 14:11:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.418 14:11:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:35.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.418 --rc genhtml_branch_coverage=1 00:05:35.418 --rc genhtml_function_coverage=1 00:05:35.418 --rc genhtml_legend=1 00:05:35.418 --rc geninfo_all_blocks=1 00:05:35.418 --rc geninfo_unexecuted_blocks=1 00:05:35.418 00:05:35.418 ' 00:05:35.418 14:11:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:35.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.418 --rc genhtml_branch_coverage=1 00:05:35.418 --rc genhtml_function_coverage=1 00:05:35.418 --rc genhtml_legend=1 00:05:35.418 --rc geninfo_all_blocks=1 00:05:35.418 --rc geninfo_unexecuted_blocks=1 00:05:35.418 00:05:35.418 ' 00:05:35.418 14:11:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:35.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.418 --rc genhtml_branch_coverage=1 00:05:35.418 --rc genhtml_function_coverage=1 00:05:35.418 --rc genhtml_legend=1 00:05:35.418 --rc geninfo_all_blocks=1 00:05:35.418 --rc geninfo_unexecuted_blocks=1 00:05:35.418 00:05:35.418 ' 00:05:35.418 14:11:41 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:35.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.418 --rc genhtml_branch_coverage=1 00:05:35.418 --rc genhtml_function_coverage=1 00:05:35.418 --rc genhtml_legend=1 00:05:35.418 --rc geninfo_all_blocks=1 00:05:35.418 --rc geninfo_unexecuted_blocks=1 00:05:35.418 00:05:35.418 ' 00:05:35.418 14:11:41 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:35.418 14:11:41 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:35.418 14:11:41 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:35.418 14:11:41 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:35.418 14:11:41 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:35.418 14:11:41 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:35.418 14:11:41 -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:35.418 14:11:41 -- setup/common.sh@18 -- # local node= 00:05:35.418 14:11:41 -- setup/common.sh@19 -- # local var val 00:05:35.418 14:11:41 -- setup/common.sh@20 -- # local mem_f mem 00:05:35.418 14:11:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:35.418 14:11:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:35.418 14:11:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:35.418 14:11:41 -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.418 14:11:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.418 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.419 14:11:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 4430792 kB' 'MemAvailable: 7360684 kB' 'Buffers: 2684 kB' 'Cached: 3130456 kB' 'SwapCached: 0 kB' 'Active: 496064 kB' 'Inactive: 2753460 kB' 'Active(anon): 126896 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753460 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118100 kB' 'Mapped: 51244 kB' 'Shmem: 10512 kB' 'KReclaimable: 88420 kB' 'Slab: 191784 kB' 'SReclaimable: 88420 kB' 'SUnreclaim: 103364 kB' 'KernelStack: 6720 kB' 'PageTables: 4588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411004 kB' 'Committed_AS: 326868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55384 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.419 14:11:41 -- 
setup/common.sh@32 -- # continue 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.419 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.419 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.679 14:11:41 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.679 14:11:41 -- setup/common.sh@31 -- 
# read -r var val _ 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # continue 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.679 14:11:41 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.679 14:11:41 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.679 14:11:41 -- setup/common.sh@33 -- # echo 2048 00:05:35.679 14:11:41 -- setup/common.sh@33 -- # return 0 00:05:35.679 14:11:41 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:35.679 14:11:41 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:35.679 14:11:41 -- setup/hugepages.sh@18 -- 
# global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:35.679 14:11:41 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:35.679 14:11:41 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:35.679 14:11:41 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:35.679 14:11:41 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:35.679 14:11:41 -- setup/hugepages.sh@207 -- # get_nodes 00:05:35.679 14:11:41 -- setup/hugepages.sh@27 -- # local node 00:05:35.679 14:11:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:35.679 14:11:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:35.679 14:11:41 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:35.679 14:11:41 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:35.679 14:11:41 -- setup/hugepages.sh@208 -- # clear_hp 00:05:35.679 14:11:41 -- setup/hugepages.sh@37 -- # local node hp 00:05:35.679 14:11:41 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:35.679 14:11:41 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:35.679 14:11:41 -- setup/hugepages.sh@41 -- # echo 0 00:05:35.679 14:11:41 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:35.679 14:11:41 -- setup/hugepages.sh@41 -- # echo 0 00:05:35.679 14:11:41 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:35.679 14:11:41 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:35.679 14:11:41 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:35.679 14:11:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:35.679 14:11:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.679 14:11:41 -- common/autotest_common.sh@10 -- # set +x 00:05:35.679 ************************************ 00:05:35.679 START TEST default_setup 00:05:35.679 ************************************ 00:05:35.679 14:11:41 -- common/autotest_common.sh@1114 -- # default_setup 00:05:35.679 14:11:41 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:35.679 14:11:41 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:35.679 14:11:41 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:35.680 14:11:41 -- setup/hugepages.sh@51 -- # shift 00:05:35.680 14:11:41 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:35.680 14:11:41 -- setup/hugepages.sh@52 -- # local node_ids 00:05:35.680 14:11:41 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:35.680 14:11:41 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:35.680 14:11:41 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:35.680 14:11:41 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:35.680 14:11:41 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:35.680 14:11:41 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:35.680 14:11:41 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:35.680 14:11:41 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:35.680 14:11:41 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:35.680 14:11:41 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:35.680 14:11:41 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:35.680 14:11:41 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:35.680 14:11:41 -- setup/hugepages.sh@73 -- # return 0 00:05:35.680 14:11:41 -- setup/hugepages.sh@137 -- # setup output 00:05:35.680 14:11:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:35.680 14:11:41 -- setup/common.sh@10 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:36.247 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:36.508 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:36.508 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:36.508 14:11:42 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:36.508 14:11:42 -- setup/hugepages.sh@89 -- # local node 00:05:36.508 14:11:42 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:36.508 14:11:42 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:36.508 14:11:42 -- setup/hugepages.sh@92 -- # local surp 00:05:36.508 14:11:42 -- setup/hugepages.sh@93 -- # local resv 00:05:36.508 14:11:42 -- setup/hugepages.sh@94 -- # local anon 00:05:36.508 14:11:42 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:36.508 14:11:42 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:36.508 14:11:42 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:36.508 14:11:42 -- setup/common.sh@18 -- # local node= 00:05:36.508 14:11:42 -- setup/common.sh@19 -- # local var val 00:05:36.508 14:11:42 -- setup/common.sh@20 -- # local mem_f mem 00:05:36.508 14:11:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:36.508 14:11:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:36.508 14:11:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:36.508 14:11:42 -- setup/common.sh@28 -- # mapfile -t mem 00:05:36.508 14:11:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:36.508 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.508 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.509 14:11:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6529580 kB' 'MemAvailable: 9459324 kB' 'Buffers: 2684 kB' 'Cached: 3130452 kB' 'SwapCached: 0 kB' 'Active: 497696 kB' 'Inactive: 2753472 kB' 'Active(anon): 128528 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119912 kB' 'Mapped: 51348 kB' 'Shmem: 10488 kB' 'KReclaimable: 88100 kB' 'Slab: 191444 kB' 'SReclaimable: 88100 kB' 'SUnreclaim: 103344 kB' 'KernelStack: 6672 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 327612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55384 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # read 
-r var val _ 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.509 14:11:42 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.509 14:11:42 -- 
setup/common.sh@32 -- # continue 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.509 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.509 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.509 14:11:42 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.510 14:11:42 -- setup/common.sh@33 -- # echo 0 00:05:36.510 14:11:42 -- setup/common.sh@33 -- # return 0 00:05:36.510 14:11:42 -- setup/hugepages.sh@97 -- # anon=0 00:05:36.510 14:11:42 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:36.510 14:11:42 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:36.510 14:11:42 -- setup/common.sh@18 -- # local node= 00:05:36.510 14:11:42 -- setup/common.sh@19 -- # local var val 00:05:36.510 14:11:42 -- setup/common.sh@20 -- # local mem_f mem 00:05:36.510 14:11:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:36.510 14:11:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:36.510 14:11:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:36.510 14:11:42 -- setup/common.sh@28 -- # mapfile -t mem 00:05:36.510 14:11:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.510 14:11:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6530084 kB' 'MemAvailable: 9459832 kB' 'Buffers: 2684 kB' 'Cached: 3130452 kB' 'SwapCached: 0 kB' 'Active: 497396 kB' 'Inactive: 2753476 kB' 'Active(anon): 128228 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119392 kB' 'Mapped: 51016 kB' 'Shmem: 10488 kB' 'KReclaimable: 88100 kB' 'Slab: 191440 kB' 'SReclaimable: 88100 kB' 'SUnreclaim: 103340 kB' 'KernelStack: 6688 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 327980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55368 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.510 14:11:42 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 
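[editor's note] The get_meminfo calls traced above all follow one pattern: slurp /proc/meminfo (or a per-node meminfo file), strip any "Node N " prefix, then read each line as "key: value" with IFS=': ' and echo the value once the requested key matches. A minimal standalone sketch of that pattern is below; the helper name meminfo_value is hypothetical and not part of the SPDK scripts.

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) prefix-strip pattern used in the trace
    meminfo_value() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo
        local -a mem
        local line var val _
        # Per-node lookups read the node's own meminfo file when it exists.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <N> "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    # e.g. meminfo_value HugePages_Surp     -> 0    (this run)
    #      meminfo_value HugePages_Total 0  -> 1024 (node 0, this run)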
00:05:36.510 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.510 14:11:42 -- 
setup/common.sh@32 -- # continue 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.510 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.510 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 
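[editor's note] Earlier in this trace, clear_hp echoes 0 into every node's hugepages-*/nr_hugepages before the test, and default_setup then expects 1024 pages of the default 2048 kB size. The sysfs paths below are the ones echoed in the trace; treating NRHUGE=1024 as the per-node count is an assumption taken from this run's output, and the commands need root.

    NRHUGE=${NRHUGE:-1024}   # assumed per-node count; this run ends up with 1024
    for node in /sys/devices/system/node/node[0-9]*; do
        hp=$node/hugepages/hugepages-2048kB/nr_hugepages
        echo 0         > "$hp"   # drop any stale reservation (the clear_hp step)
        echo "$NRHUGE" > "$hp"   # reserve fresh 2 MiB pages for the test
    done
    # System-wide fallback knob, referenced as global_huge_nr in the trace:
    #   echo 1024 > /proc/sys/vm/nr_hugepages
    grep -E 'HugePages_(Total|Free):' /proc/meminfo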
00:05:36.511 14:11:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.511 14:11:42 -- setup/common.sh@33 -- # echo 0 00:05:36.511 14:11:42 -- setup/common.sh@33 -- # return 0 00:05:36.511 14:11:42 -- setup/hugepages.sh@99 -- # surp=0 00:05:36.511 14:11:42 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:36.511 14:11:42 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:36.511 14:11:42 -- setup/common.sh@18 -- # local node= 00:05:36.511 14:11:42 -- setup/common.sh@19 -- # local var val 00:05:36.511 14:11:42 -- setup/common.sh@20 -- # local mem_f mem 00:05:36.511 14:11:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:36.511 14:11:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:36.511 14:11:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:36.511 14:11:42 -- setup/common.sh@28 -- # mapfile -t mem 00:05:36.511 14:11:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:36.511 
14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.511 14:11:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6530344 kB' 'MemAvailable: 9460092 kB' 'Buffers: 2684 kB' 'Cached: 3130452 kB' 'SwapCached: 0 kB' 'Active: 497456 kB' 'Inactive: 2753476 kB' 'Active(anon): 128288 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119400 kB' 'Mapped: 51016 kB' 'Shmem: 10488 kB' 'KReclaimable: 88100 kB' 'Slab: 191440 kB' 'SReclaimable: 88100 kB' 'SUnreclaim: 103340 kB' 'KernelStack: 6688 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 327980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55384 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.511 
14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.511 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.511 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.512 14:11:42 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.512 14:11:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.512 
14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.512 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.773 14:11:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.773 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.773 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.773 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.773 14:11:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.773 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.773 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.773 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.773 14:11:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.773 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.773 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.773 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.773 14:11:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.773 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.773 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.773 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.773 14:11:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.773 14:11:42 -- setup/common.sh@33 -- # echo 0 00:05:36.773 14:11:42 -- setup/common.sh@33 -- # return 0 00:05:36.773 nr_hugepages=1024 00:05:36.773 resv_hugepages=0 00:05:36.773 14:11:42 -- setup/hugepages.sh@100 -- # resv=0 00:05:36.773 14:11:42 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:36.773 14:11:42 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:36.773 surplus_hugepages=0 00:05:36.773 14:11:42 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:36.773 anon_hugepages=0 00:05:36.773 14:11:42 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:36.773 14:11:42 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:36.773 14:11:42 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:36.773 14:11:42 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:36.773 14:11:42 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:36.773 14:11:42 -- setup/common.sh@18 -- # local node= 00:05:36.773 14:11:42 -- setup/common.sh@19 -- # local var val 00:05:36.773 14:11:42 -- setup/common.sh@20 -- # local mem_f mem 00:05:36.773 14:11:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:36.773 14:11:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:36.773 14:11:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:36.773 14:11:42 -- setup/common.sh@28 -- # mapfile -t mem 00:05:36.773 14:11:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:36.773 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.773 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.773 14:11:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6530828 kB' 'MemAvailable: 9460576 kB' 'Buffers: 2684 kB' 'Cached: 3130452 kB' 'SwapCached: 0 kB' 'Active: 497484 kB' 'Inactive: 2753476 kB' 'Active(anon): 128316 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119400 kB' 'Mapped: 51016 kB' 
'Shmem: 10488 kB' 'KReclaimable: 88100 kB' 'Slab: 191432 kB' 'SReclaimable: 88100 kB' 'SUnreclaim: 103332 kB' 'KernelStack: 6688 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 327980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55400 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:36.773 14:11:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.773 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.773 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.773 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.773 14:11:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.773 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.773 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.773 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.773 14:11:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.773 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.773 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.773 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.773 14:11:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.773 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.773 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.773 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.773 14:11:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.773 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.773 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.773 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.773 14:11:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.773 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.774 
14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.774 14:11:42 -- 
setup/common.sh@32 -- # continue 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.774 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.774 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.775 14:11:42 -- 
setup/common.sh@32 -- # continue 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.775 14:11:42 -- setup/common.sh@33 -- # echo 1024 00:05:36.775 14:11:42 -- setup/common.sh@33 -- # return 0 00:05:36.775 14:11:42 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:36.775 14:11:42 -- setup/hugepages.sh@112 -- # get_nodes 00:05:36.775 14:11:42 -- setup/hugepages.sh@27 -- # local node 00:05:36.775 14:11:42 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:36.775 14:11:42 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:36.775 14:11:42 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:36.775 14:11:42 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:36.775 14:11:42 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:36.775 14:11:42 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:36.775 14:11:42 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:36.775 14:11:42 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:36.775 14:11:42 -- setup/common.sh@18 -- # local node=0 00:05:36.775 14:11:42 -- setup/common.sh@19 -- # local var val 00:05:36.775 14:11:42 -- setup/common.sh@20 -- # local mem_f mem 00:05:36.775 14:11:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:36.775 14:11:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:36.775 14:11:42 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:36.775 14:11:42 -- setup/common.sh@28 -- # mapfile -t mem 00:05:36.775 14:11:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.775 14:11:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6530828 kB' 'MemUsed: 5708280 kB' 'SwapCached: 0 kB' 'Active: 497380 kB' 'Inactive: 2753476 kB' 'Active(anon): 128212 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'FilePages: 3133136 kB' 'Mapped: 51016 kB' 'AnonPages: 119292 kB' 'Shmem: 10488 kB' 'KernelStack: 6672 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88100 kB' 'Slab: 191432 kB' 'SReclaimable: 88100 kB' 'SUnreclaim: 103332 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # continue 
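The xtrace above is setup/common.sh's get_meminfo walking every key of /proc/meminfo (or of a per-node /sys/devices/system/node/nodeN/meminfo file, whose lines carry a "Node N " prefix that the script strips first) with IFS=': ' and read -r, skipping keys with continue until it can echo the value of the one requested, e.g. 1024 for HugePages_Total. A minimal sketch of that parsing pattern is below; the helper name and exact structure are illustrative, reconstructed from the trace rather than copied from the actual setup/common.sh source.

#!/usr/bin/env bash
# Illustrative helper (not the verbatim setup/common.sh get_meminfo): print the
# value of one meminfo key, optionally from a specific NUMA node's meminfo file.
get_meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        line=${line#Node "$node" }            # per-node lines carry a "Node N " prefix
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue      # skip every other key, as in the trace
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

# Usage: hugepages visible system-wide and on node 0
get_meminfo_value HugePages_Total
get_meminfo_value HugePages_Free 0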
00:05:36.775 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # [[ Mapped 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.775 14:11:42 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.775 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.775 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.776 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.776 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.776 14:11:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.776 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.776 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.776 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.776 14:11:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.776 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.776 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.776 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.776 14:11:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.776 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.776 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.776 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.776 14:11:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.776 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.776 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.776 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.776 14:11:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.776 14:11:42 -- setup/common.sh@32 -- # continue 00:05:36.776 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.776 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.776 14:11:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.776 14:11:42 -- setup/common.sh@33 -- # echo 0 00:05:36.776 14:11:42 -- setup/common.sh@33 -- # return 0 00:05:36.776 14:11:42 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:36.776 14:11:42 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:36.776 node0=1024 expecting 1024 00:05:36.776 ************************************ 00:05:36.776 END TEST default_setup 00:05:36.776 ************************************ 00:05:36.776 14:11:42 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:36.776 14:11:42 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:36.776 14:11:42 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:36.776 14:11:42 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:36.776 00:05:36.776 real 0m1.119s 00:05:36.776 user 0m0.543s 00:05:36.776 sys 0m0.493s 00:05:36.776 14:11:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:36.776 14:11:42 -- common/autotest_common.sh@10 -- # set +x 00:05:36.776 14:11:42 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:36.776 14:11:42 
-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:36.776 14:11:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.776 14:11:42 -- common/autotest_common.sh@10 -- # set +x 00:05:36.776 ************************************ 00:05:36.776 START TEST per_node_1G_alloc 00:05:36.776 ************************************ 00:05:36.776 14:11:42 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:05:36.776 14:11:42 -- setup/hugepages.sh@143 -- # local IFS=, 00:05:36.776 14:11:42 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:36.776 14:11:42 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:36.776 14:11:42 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:36.776 14:11:42 -- setup/hugepages.sh@51 -- # shift 00:05:36.776 14:11:42 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:36.776 14:11:42 -- setup/hugepages.sh@52 -- # local node_ids 00:05:36.776 14:11:42 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:36.776 14:11:42 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:36.776 14:11:42 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:36.776 14:11:42 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:36.776 14:11:42 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:36.776 14:11:42 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:36.776 14:11:42 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:36.776 14:11:42 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:36.776 14:11:42 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:36.776 14:11:42 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:36.776 14:11:42 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:36.776 14:11:42 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:36.776 14:11:42 -- setup/hugepages.sh@73 -- # return 0 00:05:36.776 14:11:42 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:36.776 14:11:42 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:36.776 14:11:42 -- setup/hugepages.sh@146 -- # setup output 00:05:36.776 14:11:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:36.776 14:11:42 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:37.035 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:37.297 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:37.297 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:37.297 14:11:42 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:37.297 14:11:42 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:37.297 14:11:42 -- setup/hugepages.sh@89 -- # local node 00:05:37.297 14:11:42 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:37.297 14:11:42 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:37.297 14:11:42 -- setup/hugepages.sh@92 -- # local surp 00:05:37.297 14:11:42 -- setup/hugepages.sh@93 -- # local resv 00:05:37.297 14:11:42 -- setup/hugepages.sh@94 -- # local anon 00:05:37.298 14:11:42 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:37.298 14:11:42 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:37.298 14:11:42 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:37.298 14:11:42 -- setup/common.sh@18 -- # local node= 00:05:37.298 14:11:42 -- setup/common.sh@19 -- # local var val 00:05:37.298 14:11:42 -- setup/common.sh@20 -- # local mem_f mem 00:05:37.298 14:11:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:37.298 14:11:42 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:37.298 14:11:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:37.298 14:11:42 -- setup/common.sh@28 -- # mapfile -t mem 00:05:37.298 14:11:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.298 14:11:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7582964 kB' 'MemAvailable: 10512712 kB' 'Buffers: 2684 kB' 'Cached: 3130452 kB' 'SwapCached: 0 kB' 'Active: 497800 kB' 'Inactive: 2753480 kB' 'Active(anon): 128632 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753480 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119732 kB' 'Mapped: 51116 kB' 'Shmem: 10488 kB' 'KReclaimable: 88096 kB' 'Slab: 191440 kB' 'SReclaimable: 88096 kB' 'SUnreclaim: 103344 kB' 'KernelStack: 6704 kB' 'PageTables: 4504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 327980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.298 14:11:42 
-- setup/common.sh@31 -- # IFS=': ' 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.298 
14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.298 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.298 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.299 14:11:42 -- setup/common.sh@32 -- 
# [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.299 14:11:42 -- setup/common.sh@33 -- # echo 0 00:05:37.299 14:11:42 -- setup/common.sh@33 -- # return 0 00:05:37.299 14:11:42 -- setup/hugepages.sh@97 -- # anon=0 00:05:37.299 14:11:42 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:37.299 14:11:42 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:37.299 14:11:42 -- setup/common.sh@18 -- # local node= 00:05:37.299 14:11:42 -- setup/common.sh@19 -- # local var val 00:05:37.299 14:11:42 -- setup/common.sh@20 -- # local mem_f mem 00:05:37.299 14:11:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:37.299 14:11:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:37.299 14:11:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:37.299 14:11:42 -- setup/common.sh@28 -- # mapfile -t mem 00:05:37.299 14:11:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.299 14:11:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7582964 kB' 'MemAvailable: 10512712 kB' 'Buffers: 2684 kB' 'Cached: 3130452 kB' 'SwapCached: 0 kB' 'Active: 497440 kB' 'Inactive: 2753480 kB' 'Active(anon): 128272 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 
kB' 'Inactive(file): 2753480 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119320 kB' 'Mapped: 51016 kB' 'Shmem: 10488 kB' 'KReclaimable: 88096 kB' 'Slab: 191440 kB' 'SReclaimable: 88096 kB' 'SUnreclaim: 103344 kB' 'KernelStack: 6672 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 327980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55384 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.299 14:11:42 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # 
continue 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.299 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.299 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.300 14:11:42 -- setup/common.sh@33 -- # echo 0 00:05:37.300 14:11:42 -- setup/common.sh@33 -- # return 0 00:05:37.300 14:11:42 -- setup/hugepages.sh@99 -- # surp=0 00:05:37.300 14:11:42 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:37.300 14:11:42 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:37.300 14:11:42 -- setup/common.sh@18 -- # local node= 00:05:37.300 14:11:42 -- setup/common.sh@19 -- # local var val 00:05:37.300 14:11:42 -- setup/common.sh@20 -- # local mem_f mem 00:05:37.300 14:11:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:37.300 14:11:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:37.300 14:11:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:37.300 14:11:42 -- setup/common.sh@28 -- # mapfile -t mem 00:05:37.300 14:11:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.300 14:11:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7583396 kB' 'MemAvailable: 10513144 kB' 'Buffers: 2684 kB' 'Cached: 3130452 kB' 'SwapCached: 0 kB' 'Active: 497520 kB' 'Inactive: 2753480 kB' 'Active(anon): 128352 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753480 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119436 kB' 'Mapped: 51016 kB' 'Shmem: 10488 kB' 'KReclaimable: 88096 kB' 'Slab: 191440 kB' 'SReclaimable: 88096 kB' 'SUnreclaim: 103344 kB' 'KernelStack: 6672 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 327980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55384 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.300 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.300 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.301 14:11:42 -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.301 14:11:42 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.301 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.301 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # [[ 
HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.302 14:11:42 -- setup/common.sh@33 -- # echo 0 00:05:37.302 14:11:42 -- setup/common.sh@33 -- # return 0 00:05:37.302 nr_hugepages=512 00:05:37.302 14:11:42 -- setup/hugepages.sh@100 -- # resv=0 00:05:37.302 14:11:42 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:37.302 resv_hugepages=0 00:05:37.302 14:11:42 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:37.302 surplus_hugepages=0 00:05:37.302 anon_hugepages=0 00:05:37.302 14:11:42 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:37.302 14:11:42 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:37.302 14:11:42 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:37.302 14:11:42 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:37.302 14:11:42 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:37.302 14:11:42 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:37.302 14:11:42 -- setup/common.sh@18 -- # local node= 00:05:37.302 14:11:42 -- setup/common.sh@19 -- # local var val 00:05:37.302 14:11:42 -- setup/common.sh@20 -- # local mem_f mem 00:05:37.302 14:11:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:37.302 14:11:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:37.302 14:11:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:37.302 14:11:42 -- setup/common.sh@28 -- # mapfile -t mem 00:05:37.302 14:11:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.302 14:11:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7583396 kB' 'MemAvailable: 10513144 kB' 'Buffers: 2684 kB' 'Cached: 3130452 kB' 'SwapCached: 0 kB' 'Active: 497192 kB' 'Inactive: 2753480 kB' 'Active(anon): 128024 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753480 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119136 kB' 'Mapped: 51016 kB' 'Shmem: 10488 kB' 'KReclaimable: 88096 kB' 'Slab: 191440 kB' 'SReclaimable: 88096 kB' 'SUnreclaim: 103344 kB' 'KernelStack: 6672 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 327980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55384 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.302 
14:11:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.302 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.302 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.303 
14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.303 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.303 14:11:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.303 14:11:42 -- setup/common.sh@33 -- # echo 512 00:05:37.303 14:11:42 -- setup/common.sh@33 -- # return 0 00:05:37.304 14:11:42 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:37.304 14:11:42 -- setup/hugepages.sh@112 -- # get_nodes 00:05:37.304 14:11:42 -- setup/hugepages.sh@27 -- # local node 00:05:37.304 14:11:42 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:37.304 14:11:42 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:37.304 14:11:42 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:37.304 14:11:42 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:37.304 14:11:42 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:37.304 14:11:42 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:37.304 14:11:42 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:37.304 14:11:42 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:37.304 14:11:42 -- setup/common.sh@18 -- # local node=0 00:05:37.304 14:11:42 -- setup/common.sh@19 -- # local 
var val 00:05:37.304 14:11:42 -- setup/common.sh@20 -- # local mem_f mem 00:05:37.304 14:11:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:37.304 14:11:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:37.304 14:11:42 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:37.304 14:11:42 -- setup/common.sh@28 -- # mapfile -t mem 00:05:37.304 14:11:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.304 14:11:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7583396 kB' 'MemUsed: 4655712 kB' 'SwapCached: 0 kB' 'Active: 497464 kB' 'Inactive: 2753480 kB' 'Active(anon): 128296 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753480 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'FilePages: 3133136 kB' 'Mapped: 51016 kB' 'AnonPages: 119408 kB' 'Shmem: 10488 kB' 'KernelStack: 6688 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88096 kB' 'Slab: 191436 kB' 'SReclaimable: 88096 kB' 'SUnreclaim: 103340 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.304 14:11:42 -- 
setup/common.sh@32 -- # continue 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.304 14:11:42 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.304 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.304 14:11:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.305 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.305 14:11:42 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:37.305 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.305 14:11:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.305 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.305 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.305 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.305 14:11:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.305 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.305 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.305 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.305 14:11:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.305 14:11:42 -- setup/common.sh@32 -- # continue 00:05:37.305 14:11:42 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.305 14:11:42 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.305 14:11:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.305 14:11:42 -- setup/common.sh@33 -- # echo 0 00:05:37.305 14:11:42 -- setup/common.sh@33 -- # return 0 00:05:37.305 14:11:42 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:37.305 14:11:42 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:37.305 14:11:42 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:37.305 14:11:42 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:37.305 node0=512 expecting 512 00:05:37.305 14:11:42 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:37.305 14:11:42 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:37.305 00:05:37.305 real 0m0.632s 00:05:37.305 user 0m0.301s 00:05:37.305 sys 0m0.347s 00:05:37.305 14:11:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:37.305 14:11:42 -- common/autotest_common.sh@10 -- # set +x 00:05:37.305 ************************************ 00:05:37.305 END TEST per_node_1G_alloc 00:05:37.305 ************************************ 00:05:37.566 14:11:42 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:37.566 14:11:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:37.566 14:11:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.566 14:11:42 -- common/autotest_common.sh@10 -- # set +x 00:05:37.566 ************************************ 00:05:37.566 START TEST even_2G_alloc 00:05:37.566 ************************************ 00:05:37.566 14:11:42 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:05:37.566 14:11:42 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:37.566 14:11:42 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:37.566 14:11:42 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:37.566 14:11:42 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:37.566 14:11:42 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:37.566 14:11:42 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:37.566 14:11:42 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:37.566 14:11:42 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:37.566 14:11:42 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:37.566 14:11:42 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:37.566 14:11:42 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:37.566 14:11:42 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:37.566 14:11:42 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:37.566 14:11:42 -- 
setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:37.566 14:11:42 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:37.566 14:11:42 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:37.566 14:11:42 -- setup/hugepages.sh@83 -- # : 0 00:05:37.566 14:11:42 -- setup/hugepages.sh@84 -- # : 0 00:05:37.566 14:11:42 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:37.566 14:11:42 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:37.566 14:11:42 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:37.566 14:11:42 -- setup/hugepages.sh@153 -- # setup output 00:05:37.566 14:11:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:37.566 14:11:42 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:37.839 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:37.839 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:37.839 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:37.839 14:11:43 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:37.839 14:11:43 -- setup/hugepages.sh@89 -- # local node 00:05:37.839 14:11:43 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:37.839 14:11:43 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:37.839 14:11:43 -- setup/hugepages.sh@92 -- # local surp 00:05:37.839 14:11:43 -- setup/hugepages.sh@93 -- # local resv 00:05:37.839 14:11:43 -- setup/hugepages.sh@94 -- # local anon 00:05:37.839 14:11:43 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:37.839 14:11:43 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:37.839 14:11:43 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:37.839 14:11:43 -- setup/common.sh@18 -- # local node= 00:05:37.839 14:11:43 -- setup/common.sh@19 -- # local var val 00:05:37.839 14:11:43 -- setup/common.sh@20 -- # local mem_f mem 00:05:37.839 14:11:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:37.839 14:11:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:37.839 14:11:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:37.839 14:11:43 -- setup/common.sh@28 -- # mapfile -t mem 00:05:37.839 14:11:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:37.839 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.839 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.839 14:11:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6538484 kB' 'MemAvailable: 9468232 kB' 'Buffers: 2684 kB' 'Cached: 3130452 kB' 'SwapCached: 0 kB' 'Active: 497644 kB' 'Inactive: 2753480 kB' 'Active(anon): 128476 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753480 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119560 kB' 'Mapped: 51024 kB' 'Shmem: 10488 kB' 'KReclaimable: 88096 kB' 'Slab: 191444 kB' 'SReclaimable: 88096 kB' 'SUnreclaim: 103348 kB' 'KernelStack: 6648 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 327980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:37.839 14:11:43 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.839 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.839 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.839 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.839 14:11:43 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.839 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.839 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.839 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.839 14:11:43 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.839 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.839 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.839 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.839 14:11:43 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.839 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.839 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.839 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.839 14:11:43 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.839 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.839 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.839 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.839 14:11:43 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.839 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.839 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.839 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.839 14:11:43 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.839 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.839 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.839 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.839 14:11:43 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.839 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.839 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.839 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.839 14:11:43 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.839 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.839 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.839 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.839 14:11:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.839 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.839 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.839 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.839 14:11:43 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.839 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.839 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.839 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.839 14:11:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.839 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.839 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.839 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.839 
14:11:43 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.839 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.839 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.839 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.839 14:11:43 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.839 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.839 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.839 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.839 14:11:43 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.839 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.840 14:11:43 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # 
continue 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.840 14:11:43 -- setup/common.sh@33 -- # echo 0 00:05:37.840 14:11:43 -- setup/common.sh@33 -- # return 0 00:05:37.840 14:11:43 -- setup/hugepages.sh@97 -- # anon=0 00:05:37.840 14:11:43 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:37.840 14:11:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:37.840 14:11:43 -- setup/common.sh@18 -- # local node= 00:05:37.840 14:11:43 -- setup/common.sh@19 -- # local var val 00:05:37.840 14:11:43 -- setup/common.sh@20 -- # local mem_f mem 00:05:37.840 14:11:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:37.840 14:11:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:37.840 14:11:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:37.840 14:11:43 -- setup/common.sh@28 -- # mapfile -t mem 00:05:37.840 14:11:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.840 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.840 14:11:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6538484 kB' 'MemAvailable: 9468232 kB' 'Buffers: 2684 kB' 'Cached: 3130452 kB' 'SwapCached: 0 kB' 'Active: 497504 kB' 'Inactive: 2753480 kB' 'Active(anon): 128336 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753480 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119420 kB' 'Mapped: 51132 kB' 'Shmem: 10488 kB' 'KReclaimable: 88096 kB' 'Slab: 191452 kB' 'SReclaimable: 88096 kB' 'SUnreclaim: 103356 kB' 'KernelStack: 6648 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 327980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:37.840 14:11:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # continue 
00:05:37.841 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.841 14:11:43 -- setup/common.sh@32 -- # continue 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.841 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.117 14:11:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.117 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.117 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.117 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.117 14:11:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.117 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.117 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.117 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.117 14:11:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.117 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.117 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.117 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.117 14:11:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.117 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.117 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # 
continue 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.118 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.118 14:11:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.118 14:11:43 -- setup/common.sh@33 -- # echo 0 00:05:38.118 14:11:43 -- setup/common.sh@33 -- # return 0 00:05:38.118 14:11:43 -- setup/hugepages.sh@99 -- # surp=0 00:05:38.118 14:11:43 -- 
setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:38.118 14:11:43 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:38.118 14:11:43 -- setup/common.sh@18 -- # local node= 00:05:38.118 14:11:43 -- setup/common.sh@19 -- # local var val 00:05:38.119 14:11:43 -- setup/common.sh@20 -- # local mem_f mem 00:05:38.119 14:11:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:38.119 14:11:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:38.119 14:11:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:38.119 14:11:43 -- setup/common.sh@28 -- # mapfile -t mem 00:05:38.119 14:11:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:38.119 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.119 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.119 14:11:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6538484 kB' 'MemAvailable: 9468232 kB' 'Buffers: 2684 kB' 'Cached: 3130452 kB' 'SwapCached: 0 kB' 'Active: 497236 kB' 'Inactive: 2753480 kB' 'Active(anon): 128068 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753480 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119424 kB' 'Mapped: 51016 kB' 'Shmem: 10488 kB' 'KReclaimable: 88096 kB' 'Slab: 191444 kB' 'SReclaimable: 88096 kB' 'SUnreclaim: 103348 kB' 'KernelStack: 6688 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 327980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:38.119 14:11:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.119 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.119 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.119 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.119 14:11:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.119 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.119 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.119 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.119 14:11:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.119 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.119 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.119 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.119 14:11:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.119 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.119 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.119 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.119 14:11:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.119 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.119 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.119 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.119 14:11:43 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.119 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.119 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.119 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.119 14:11:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.119 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.119 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.119 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.119 14:11:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.119 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.119 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.119 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.119 14:11:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.119 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.119 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.119 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.119 14:11:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.119 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.119 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.119 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.119 14:11:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.119 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.119 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.119 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.119 14:11:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.119 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.119 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.119 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.119 14:11:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.119 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.119 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.119 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.119 14:11:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.119 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.119 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.119 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.119 14:11:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.119 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.119 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.119 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.119 14:11:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.119 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.119 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.119 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.119 14:11:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.119 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.119 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.119 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.119 14:11:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.119 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.119 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 
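As a quick consistency check on the snapshots being scanned in this stretch: the hugepage counters in the dumps agree with each other, since Hugetlb is simply the page count multiplied by the page size. The one-liner below only restates numbers already present in the dumps above:

  # HugePages_Total * Hugepagesize should equal Hugetlb (values from the dumps above)
  echo $(( 1024 * 2048 ))   # -> 2097152, matching 'Hugetlb: 2097152 kB'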
00:05:38.119 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.119 14:11:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.119 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.119 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.119 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.119 14:11:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.119 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.120 14:11:43 -- 
setup/common.sh@32 -- # continue 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 
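This is the third time the same lookup pattern runs in a row: hugepages.sh@97 stored the AnonHugePages result as anon=0, @99 stored HugePages_Surp as surp=0, and the pass in progress here (@100) will store HugePages_Rsvd as resv. Condensed, the bookkeeping around these scans appears to be the following (a sketch using the get_meminfo reconstruction shown earlier; the assignment form is inferred from the traced anon=0 / surp=0 / resv=0 results):

  # Counters collected by verify_nr_hugepages in this stretch of the trace (sketch)
  anon=$(get_meminfo AnonHugePages)    # hugepages.sh@97  -> 0
  surp=$(get_meminfo HugePages_Surp)   # hugepages.sh@99  -> 0
  resv=$(get_meminfo HugePages_Rsvd)   # hugepages.sh@100 -> 0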
00:05:38.120 14:11:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.120 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.120 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.121 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.121 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.121 14:11:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.121 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.121 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.121 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.121 14:11:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.121 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.121 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.121 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.121 14:11:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.121 14:11:43 -- setup/common.sh@33 -- # echo 0 00:05:38.121 14:11:43 -- setup/common.sh@33 -- # return 0 00:05:38.121 14:11:43 -- setup/hugepages.sh@100 -- # resv=0 00:05:38.121 nr_hugepages=1024 00:05:38.121 14:11:43 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:38.121 resv_hugepages=0 00:05:38.121 14:11:43 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:38.121 surplus_hugepages=0 00:05:38.121 14:11:43 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:38.121 anon_hugepages=0 00:05:38.121 14:11:43 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:38.121 14:11:43 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:38.121 14:11:43 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:38.121 14:11:43 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:38.121 14:11:43 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:38.121 14:11:43 -- setup/common.sh@18 -- # local node= 00:05:38.121 14:11:43 -- setup/common.sh@19 -- # local var val 00:05:38.121 14:11:43 -- setup/common.sh@20 -- # local mem_f mem 00:05:38.121 14:11:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:38.121 14:11:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:38.121 14:11:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:38.121 14:11:43 -- setup/common.sh@28 -- # mapfile -t mem 00:05:38.121 14:11:43 -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:05:38.121 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.121 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.121 14:11:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6538736 kB' 'MemAvailable: 9468484 kB' 'Buffers: 2684 kB' 'Cached: 3130452 kB' 'SwapCached: 0 kB' 'Active: 497464 kB' 'Inactive: 2753480 kB' 'Active(anon): 128296 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753480 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119420 kB' 'Mapped: 51016 kB' 'Shmem: 10488 kB' 'KReclaimable: 88096 kB' 'Slab: 191444 kB' 'SReclaimable: 88096 kB' 'SUnreclaim: 103348 kB' 'KernelStack: 6688 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 327980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:38.121 14:11:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.121 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.121 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.121 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.121 14:11:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.121 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.121 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.121 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.121 14:11:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.121 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.121 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.121 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.121 14:11:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.121 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.121 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.121 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.121 14:11:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.121 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.121 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.121 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.121 14:11:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.121 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.121 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.121 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.121 14:11:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.121 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.121 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.121 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.121 14:11:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.121 
14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.121 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.121 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.121 14:11:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.121 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.121 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.121 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.121 14:11:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.121 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.121 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.121 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.121 14:11:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.121 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.121 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.121 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.121 14:11:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.121 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.121 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.121 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.121 14:11:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.121 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.121 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.121 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.121 14:11:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.121 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.121 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.121 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.121 14:11:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.121 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.121 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.121 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.121 14:11:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.121 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.121 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.121 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.121 14:11:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.121 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.121 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.121 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # continue 
00:05:38.122 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.122 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.122 14:11:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.123 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.123 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.123 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.123 14:11:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.123 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.123 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.123 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 
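The reason for one more full scan here is the assertion visible at hugepages.sh@107 and @110 in the trace: the kernel's HugePages_Total has to equal the requested pool plus any surplus and reserved pages. With the values gathered above the check collapses to a trivial identity (a sketch of the comparison, not a quote of hugepages.sh):

  # What the @107/@110 assertions reduce to with the traced values
  nr_hugepages=1024; surp=0; resv=0        # requested pool and the counters gathered above
  total=1024                               # get_meminfo HugePages_Total in this run
  (( total == nr_hugepages + surp + resv )) && echo ok    # 1024 == 1024 + 0 + 0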
00:05:38.123 14:11:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.123 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.123 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.123 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.123 14:11:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.123 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.123 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.123 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.123 14:11:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.123 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.123 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.123 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.123 14:11:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.123 14:11:43 -- setup/common.sh@33 -- # echo 1024 00:05:38.123 14:11:43 -- setup/common.sh@33 -- # return 0 00:05:38.123 14:11:43 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:38.123 14:11:43 -- setup/hugepages.sh@112 -- # get_nodes 00:05:38.123 14:11:43 -- setup/hugepages.sh@27 -- # local node 00:05:38.123 14:11:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:38.123 14:11:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:38.123 14:11:43 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:38.123 14:11:43 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:38.123 14:11:43 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:38.123 14:11:43 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:38.123 14:11:43 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:38.123 14:11:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:38.123 14:11:43 -- setup/common.sh@18 -- # local node=0 00:05:38.123 14:11:43 -- setup/common.sh@19 -- # local var val 00:05:38.123 14:11:43 -- setup/common.sh@20 -- # local mem_f mem 00:05:38.123 14:11:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:38.123 14:11:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:38.123 14:11:43 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:38.123 14:11:43 -- setup/common.sh@28 -- # mapfile -t mem 00:05:38.123 14:11:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:38.123 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.123 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.123 14:11:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6538736 kB' 'MemUsed: 5700372 kB' 'SwapCached: 0 kB' 'Active: 497252 kB' 'Inactive: 2753480 kB' 'Active(anon): 128084 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753480 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3133136 kB' 'Mapped: 51016 kB' 'AnonPages: 119432 kB' 'Shmem: 10488 kB' 'KernelStack: 6688 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88096 kB' 'Slab: 191444 kB' 'SReclaimable: 88096 kB' 'SUnreclaim: 103348 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:38.123 14:11:43 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.123 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.123 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.123 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.123 14:11:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.123 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.123 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.123 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.123 14:11:43 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.123 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.123 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.123 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.123 14:11:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.123 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.123 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.123 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.123 14:11:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.123 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.123 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.123 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.123 14:11:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.123 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.123 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.123 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.123 14:11:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.123 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.123 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.123 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.123 14:11:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.123 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.123 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.123 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.123 14:11:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.123 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.123 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.123 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.123 14:11:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.123 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.123 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.123 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.123 14:11:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.123 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.123 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.123 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.123 14:11:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.123 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.123 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.123 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.124 14:11:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.124 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.124 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 
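Note that this pass differs from the earlier ones: get_meminfo was called as 'get_meminfo HugePages_Surp 0', so it is scanning node0's meminfo (hence the MemUsed and FilePages fields that only the per-node file reports). Based on the hugepages.sh@27-@33 and @115-@117 entries in the trace, the per-node bookkeeping looks roughly like the sketch below (the node enumeration and the += steps come from the trace; the initialisation and node count are inferred, and get_meminfo is the reconstruction shown earlier):

  shopt -s extglob                      # for the +([0-9]) glob used by the trace
  nodes_test[0]=1024                    # pages the test asked for on node 0 (set earlier)
  resv=0                                # from the HugePages_Rsvd lookup above
  for node in /sys/devices/system/node/node+([0-9]); do
      nodes_sys[${node##*node}]=1024    # what the kernel reports for this node (1024 in this run)
  done
  no_nodes=${#nodes_sys[@]}             # 1 on this single-node VM
  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))                                    # fold in reserved pages
      (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))   # and per-node surplus
  done
  echo "node0=${nodes_test[0]} expecting ${nodes_sys[0]}"               # -> node0=1024 expecting 1024

The 'node0=1024 expecting 1024' line and the final [[ 1024 == 1024 ]] check that follow in the log are the observable outcome of exactly this comparison.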
00:05:38.124 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.124 14:11:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.124 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.124 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.124 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.124 14:11:43 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.124 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.124 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.124 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.124 14:11:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.124 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.124 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.124 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.124 14:11:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.124 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.124 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.124 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.124 14:11:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.124 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.124 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.124 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.124 14:11:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.124 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.124 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.124 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.124 14:11:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.124 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.124 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.124 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.124 14:11:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.124 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.124 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.124 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.124 14:11:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.124 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.124 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.124 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.124 14:11:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.124 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.124 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.124 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.124 14:11:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.124 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.124 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.124 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.124 14:11:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.124 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.124 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.124 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.124 14:11:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.124 14:11:43 -- 
setup/common.sh@32 -- # continue 00:05:38.124 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.124 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.124 14:11:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.124 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.124 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.124 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.124 14:11:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.124 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.124 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.124 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.124 14:11:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.124 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.124 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.124 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.124 14:11:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.124 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.124 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.124 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.124 14:11:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.124 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.124 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.124 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.124 14:11:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.124 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.124 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.124 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.124 14:11:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.124 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.124 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.125 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.125 14:11:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.125 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.125 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.125 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.125 14:11:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.125 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.125 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.125 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.125 14:11:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.125 14:11:43 -- setup/common.sh@32 -- # continue 00:05:38.125 14:11:43 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.125 14:11:43 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.125 14:11:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.125 14:11:43 -- setup/common.sh@33 -- # echo 0 00:05:38.125 14:11:43 -- setup/common.sh@33 -- # return 0 00:05:38.125 14:11:43 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:38.125 14:11:43 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:38.125 14:11:43 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:38.125 14:11:43 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:38.125 
node0=1024 expecting 1024 00:05:38.125 14:11:43 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:38.125 14:11:43 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:38.125 00:05:38.125 real 0m0.624s 00:05:38.125 user 0m0.310s 00:05:38.125 sys 0m0.353s 00:05:38.125 14:11:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:38.125 14:11:43 -- common/autotest_common.sh@10 -- # set +x 00:05:38.125 ************************************ 00:05:38.125 END TEST even_2G_alloc 00:05:38.125 ************************************ 00:05:38.125 14:11:43 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:38.125 14:11:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:38.125 14:11:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.125 14:11:43 -- common/autotest_common.sh@10 -- # set +x 00:05:38.125 ************************************ 00:05:38.125 START TEST odd_alloc 00:05:38.125 ************************************ 00:05:38.125 14:11:43 -- common/autotest_common.sh@1114 -- # odd_alloc 00:05:38.125 14:11:43 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:38.125 14:11:43 -- setup/hugepages.sh@49 -- # local size=2098176 00:05:38.125 14:11:43 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:38.125 14:11:43 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:38.125 14:11:43 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:38.125 14:11:43 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:38.125 14:11:43 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:38.125 14:11:43 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:38.125 14:11:43 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:38.125 14:11:43 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:38.125 14:11:43 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:38.125 14:11:43 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:38.125 14:11:43 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:38.125 14:11:43 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:38.125 14:11:43 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:38.125 14:11:43 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:38.125 14:11:43 -- setup/hugepages.sh@83 -- # : 0 00:05:38.125 14:11:43 -- setup/hugepages.sh@84 -- # : 0 00:05:38.125 14:11:43 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:38.125 14:11:43 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:38.125 14:11:43 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:38.125 14:11:43 -- setup/hugepages.sh@160 -- # setup output 00:05:38.125 14:11:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:38.125 14:11:43 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:38.394 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:38.657 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:38.657 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:38.657 14:11:44 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:38.657 14:11:44 -- setup/hugepages.sh@89 -- # local node 00:05:38.657 14:11:44 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:38.657 14:11:44 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:38.657 14:11:44 -- setup/hugepages.sh@92 -- # local surp 00:05:38.657 14:11:44 -- setup/hugepages.sh@93 -- # local resv 00:05:38.657 14:11:44 -- setup/hugepages.sh@94 -- # local anon 00:05:38.657 14:11:44 -- 
setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:38.657 14:11:44 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:38.657 14:11:44 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:38.657 14:11:44 -- setup/common.sh@18 -- # local node= 00:05:38.657 14:11:44 -- setup/common.sh@19 -- # local var val 00:05:38.657 14:11:44 -- setup/common.sh@20 -- # local mem_f mem 00:05:38.657 14:11:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:38.657 14:11:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:38.657 14:11:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:38.657 14:11:44 -- setup/common.sh@28 -- # mapfile -t mem 00:05:38.657 14:11:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:38.657 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.657 14:11:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6534692 kB' 'MemAvailable: 9464440 kB' 'Buffers: 2684 kB' 'Cached: 3130452 kB' 'SwapCached: 0 kB' 'Active: 497276 kB' 'Inactive: 2753480 kB' 'Active(anon): 128108 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753480 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119504 kB' 'Mapped: 51172 kB' 'Shmem: 10488 kB' 'KReclaimable: 88096 kB' 'Slab: 191424 kB' 'SReclaimable: 88096 kB' 'SUnreclaim: 103328 kB' 'KernelStack: 6696 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 327980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55368 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:38.657 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.657 14:11:44 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.657 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.657 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.657 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.657 14:11:44 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.657 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.657 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.657 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.657 14:11:44 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.657 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.657 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.657 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.657 14:11:44 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.657 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.657 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.657 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.657 14:11:44 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.657 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.657 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 
00:05:38.657 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.657 14:11:44 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # 
continue 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.658 14:11:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.658 14:11:44 -- setup/common.sh@33 -- # echo 0 00:05:38.658 14:11:44 -- setup/common.sh@33 -- # return 0 00:05:38.658 14:11:44 -- setup/hugepages.sh@97 -- # anon=0 00:05:38.658 14:11:44 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:38.658 14:11:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:38.658 14:11:44 -- setup/common.sh@18 -- # local node= 00:05:38.658 14:11:44 -- setup/common.sh@19 -- # local var val 00:05:38.658 14:11:44 -- setup/common.sh@20 -- # local mem_f mem 00:05:38.658 14:11:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:38.658 14:11:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:38.658 14:11:44 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:05:38.658 14:11:44 -- setup/common.sh@28 -- # mapfile -t mem 00:05:38.658 14:11:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.658 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.659 14:11:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6534692 kB' 'MemAvailable: 9464440 kB' 'Buffers: 2684 kB' 'Cached: 3130452 kB' 'SwapCached: 0 kB' 'Active: 497140 kB' 'Inactive: 2753480 kB' 'Active(anon): 127972 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753480 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119132 kB' 'Mapped: 51016 kB' 'Shmem: 10488 kB' 'KReclaimable: 88096 kB' 'Slab: 191428 kB' 'SReclaimable: 88096 kB' 'SUnreclaim: 103332 kB' 'KernelStack: 6672 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 327980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55352 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.659 14:11:44 -- setup/common.sh@31 
-- # read -r var val _ 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # continue 
00:05:38.659 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.659 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.659 14:11:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 
00:05:38.660 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.660 14:11:44 -- setup/common.sh@33 -- # echo 0 00:05:38.660 14:11:44 -- setup/common.sh@33 -- # return 0 00:05:38.660 14:11:44 -- setup/hugepages.sh@99 -- # surp=0 00:05:38.660 14:11:44 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:38.660 14:11:44 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:38.660 14:11:44 -- setup/common.sh@18 -- # local node= 00:05:38.660 14:11:44 -- setup/common.sh@19 -- # local var val 00:05:38.660 14:11:44 -- setup/common.sh@20 -- # local mem_f mem 00:05:38.660 14:11:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:38.660 14:11:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:38.660 14:11:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:38.660 14:11:44 -- setup/common.sh@28 -- # mapfile -t mem 00:05:38.660 14:11:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.660 14:11:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6534692 kB' 'MemAvailable: 9464440 kB' 'Buffers: 2684 kB' 'Cached: 3130452 kB' 'SwapCached: 0 kB' 'Active: 497140 kB' 'Inactive: 2753480 kB' 'Active(anon): 127972 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753480 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119132 kB' 'Mapped: 51016 kB' 'Shmem: 10488 kB' 'KReclaimable: 88096 kB' 'Slab: 191428 kB' 'SReclaimable: 88096 kB' 'SUnreclaim: 103332 kB' 'KernelStack: 6672 kB' 
'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 327980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55352 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:38.660 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.660 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.660 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.661 14:11:44 
-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.661 14:11:44 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.661 14:11:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.661 14:11:44 -- setup/common.sh@33 -- # echo 0 00:05:38.661 14:11:44 -- setup/common.sh@33 -- # return 0 00:05:38.661 14:11:44 -- setup/hugepages.sh@100 -- # resv=0 00:05:38.661 nr_hugepages=1025 00:05:38.661 14:11:44 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:38.661 resv_hugepages=0 00:05:38.661 14:11:44 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:38.661 surplus_hugepages=0 00:05:38.661 14:11:44 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:38.661 anon_hugepages=0 00:05:38.661 14:11:44 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:38.661 14:11:44 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:38.661 14:11:44 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:38.661 14:11:44 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:38.661 14:11:44 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:38.661 14:11:44 -- setup/common.sh@18 -- # local node= 00:05:38.661 14:11:44 -- setup/common.sh@19 -- # local var val 00:05:38.661 14:11:44 -- setup/common.sh@20 -- # local mem_f mem 00:05:38.661 14:11:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:38.661 14:11:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:38.661 14:11:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:38.661 14:11:44 -- setup/common.sh@28 -- # mapfile -t mem 00:05:38.661 14:11:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.661 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.662 14:11:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6534692 kB' 'MemAvailable: 9464440 kB' 'Buffers: 2684 kB' 'Cached: 3130452 kB' 'SwapCached: 0 kB' 'Active: 497120 kB' 'Inactive: 2753480 kB' 'Active(anon): 127952 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753480 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119372 kB' 'Mapped: 51016 kB' 'Shmem: 10488 kB' 'KReclaimable: 88096 kB' 'Slab: 191416 kB' 'SReclaimable: 88096 kB' 'SUnreclaim: 103320 kB' 'KernelStack: 6656 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 327980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55352 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 
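The xtrace above is setup/common.sh's get_meminfo helper at work: it prints the full /proc/meminfo snapshot with printf and then walks it one 'key: value' pair at a time (IFS=': ' read -r var val _), continuing past every field until it hits the one requested, here HugePages_Total, and echoes its value. A minimal sketch of that pattern, reconstructed from the trace rather than taken from the actual setup/common.sh source (the function name below is illustrative only):

shopt -s extglob
get_meminfo_sketch() {
    local get=$1 node=${2:-}              # field to fetch, optional NUMA node
    local mem_f=/proc/meminfo var val _ line
    # per-node lookups read the node-local meminfo instead of the global one
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # per-node files prefix every line with "Node N"
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

Against the snapshot dumped above, such a lookup would report 1025 for HugePages_Total and 0 for HugePages_Surp and HugePages_Rsvd, which is exactly what the trace echoes back below.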
00:05:38.662 14:11:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # continue 
00:05:38.662 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # [[ SReclaimable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.662 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.662 14:11:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.663 14:11:44 -- setup/common.sh@33 -- # echo 1025 00:05:38.663 14:11:44 -- setup/common.sh@33 -- # return 0 00:05:38.663 14:11:44 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:38.663 14:11:44 -- setup/hugepages.sh@112 -- # get_nodes 00:05:38.663 14:11:44 -- setup/hugepages.sh@27 -- # local node 00:05:38.663 14:11:44 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:38.663 14:11:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 
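The wall of xtrace above is setup/common.sh's get_meminfo scanning every key of a meminfo file (either /proc/meminfo or, when a node argument is given, /sys/devices/system/node/nodeN/meminfo) until it reaches the requested key, here HugePages_Total, and echoing its value (1025). Each '[[ SomeKey == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] ... continue' pair in the trace is one iteration of that comparison against a non-matching key. As a rough, illustrative reconstruction of that loop (my own sketch and naming, not the SPDK helper verbatim):

get_meminfo_sketch() {      # usage: get_meminfo_sketch <Key> [<node>]
    local get=$1 node=$2 mem_f=/proc/meminfo line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        # per-node meminfo lines carry a "Node N " prefix; drop it first
        [[ $line =~ ^Node\ [0-9]+\ (.*)$ ]] && line=${BASH_REMATCH[1]}
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"      # e.g. 1025 for HugePages_Total in the trace above
            return 0
        fi
    done < "$mem_f"
    return 1
}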
00:05:38.663 14:11:44 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:38.663 14:11:44 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:38.663 14:11:44 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:38.663 14:11:44 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:38.663 14:11:44 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:38.663 14:11:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:38.663 14:11:44 -- setup/common.sh@18 -- # local node=0 00:05:38.663 14:11:44 -- setup/common.sh@19 -- # local var val 00:05:38.663 14:11:44 -- setup/common.sh@20 -- # local mem_f mem 00:05:38.663 14:11:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:38.663 14:11:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:38.663 14:11:44 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:38.663 14:11:44 -- setup/common.sh@28 -- # mapfile -t mem 00:05:38.663 14:11:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.663 14:11:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6534692 kB' 'MemUsed: 5704416 kB' 'SwapCached: 0 kB' 'Active: 497140 kB' 'Inactive: 2753480 kB' 'Active(anon): 127972 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753480 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3133136 kB' 'Mapped: 51016 kB' 'AnonPages: 119432 kB' 'Shmem: 10488 kB' 'KernelStack: 6688 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88096 kB' 'Slab: 191412 kB' 'SReclaimable: 88096 kB' 'SUnreclaim: 103316 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.663 
14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.663 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.663 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.664 14:11:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.664 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.664 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.664 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.664 14:11:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.664 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.664 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.664 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.664 
14:11:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.664 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.664 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.664 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.664 14:11:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.664 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.664 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.664 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.664 14:11:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.664 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.664 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.664 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.664 14:11:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.664 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.664 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.664 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.664 14:11:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.664 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.664 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.664 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.664 14:11:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.664 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.664 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.664 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.664 14:11:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.664 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.664 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.664 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.664 14:11:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.664 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.664 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.664 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.664 14:11:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.664 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.664 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.664 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.664 14:11:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.664 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.664 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.664 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.664 14:11:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.664 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.664 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.664 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.664 14:11:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.664 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.664 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.664 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.664 14:11:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.664 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.664 14:11:44 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:38.664 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.664 14:11:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.664 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.664 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.664 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.664 14:11:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.664 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.664 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.664 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.664 14:11:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.664 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.664 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.664 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.664 14:11:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.664 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.664 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.664 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.664 14:11:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.664 14:11:44 -- setup/common.sh@32 -- # continue 00:05:38.664 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.664 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.664 14:11:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.664 14:11:44 -- setup/common.sh@33 -- # echo 0 00:05:38.664 14:11:44 -- setup/common.sh@33 -- # return 0 00:05:38.664 14:11:44 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:38.664 14:11:44 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:38.664 14:11:44 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:38.664 14:11:44 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:38.664 node0=1025 expecting 1025 00:05:38.664 14:11:44 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:38.664 14:11:44 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:38.664 00:05:38.664 real 0m0.587s 00:05:38.664 user 0m0.266s 00:05:38.664 sys 0m0.357s 00:05:38.664 14:11:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:38.664 14:11:44 -- common/autotest_common.sh@10 -- # set +x 00:05:38.664 ************************************ 00:05:38.664 END TEST odd_alloc 00:05:38.664 ************************************ 00:05:38.664 14:11:44 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:38.664 14:11:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:38.664 14:11:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.664 14:11:44 -- common/autotest_common.sh@10 -- # set +x 00:05:38.923 ************************************ 00:05:38.923 START TEST custom_alloc 00:05:38.923 ************************************ 00:05:38.923 14:11:44 -- common/autotest_common.sh@1114 -- # custom_alloc 00:05:38.923 14:11:44 -- setup/hugepages.sh@167 -- # local IFS=, 00:05:38.923 14:11:44 -- setup/hugepages.sh@169 -- # local node 00:05:38.923 14:11:44 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:38.923 14:11:44 -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:38.923 14:11:44 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:38.923 14:11:44 -- setup/hugepages.sh@174 -- 
# get_test_nr_hugepages 1048576 00:05:38.923 14:11:44 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:38.923 14:11:44 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:38.923 14:11:44 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:38.923 14:11:44 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:38.923 14:11:44 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:38.923 14:11:44 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:38.923 14:11:44 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:38.923 14:11:44 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:38.923 14:11:44 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:38.923 14:11:44 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:38.923 14:11:44 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:38.923 14:11:44 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:38.923 14:11:44 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:38.923 14:11:44 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:38.923 14:11:44 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:38.923 14:11:44 -- setup/hugepages.sh@83 -- # : 0 00:05:38.923 14:11:44 -- setup/hugepages.sh@84 -- # : 0 00:05:38.923 14:11:44 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:38.923 14:11:44 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:38.923 14:11:44 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:38.923 14:11:44 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:38.923 14:11:44 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:38.923 14:11:44 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:38.923 14:11:44 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:38.923 14:11:44 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:38.923 14:11:44 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:38.923 14:11:44 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:38.923 14:11:44 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:38.923 14:11:44 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:38.923 14:11:44 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:38.923 14:11:44 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:38.923 14:11:44 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:38.923 14:11:44 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:38.923 14:11:44 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:38.923 14:11:44 -- setup/hugepages.sh@78 -- # return 0 00:05:38.923 14:11:44 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:38.923 14:11:44 -- setup/hugepages.sh@187 -- # setup output 00:05:38.923 14:11:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:38.923 14:11:44 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:39.184 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:39.184 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:39.184 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:39.184 14:11:44 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:39.184 14:11:44 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:39.184 14:11:44 -- setup/hugepages.sh@89 -- # local node 00:05:39.184 14:11:44 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:39.184 14:11:44 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:39.184 14:11:44 -- setup/hugepages.sh@92 -- # local surp 
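The custom_alloc test that starts above asks for a 1048576 kB (1 GiB) hugepage pool; with the guest's default 2048 kB hugepage size that becomes 512 pages, all assigned to node0 and handed to scripts/setup.sh via HUGENODE, exactly as the trace prints. A standalone sketch of that arithmetic (variable names are mine, mirroring what the trace shows):

size_kb=1048576                                  # pool size requested by custom_alloc
hugepagesize_kb=2048                             # default hugepage size in this VM
nr_hugepages=$(( size_kb / hugepagesize_kb ))    # -> 512
no_nodes=1                                       # only node0 exists in this guest
declare -a nodes_hp
for (( node = 0; node < no_nodes; node++ )); do
    nodes_hp[node]=$nr_hugepages                 # all 512 pages go to node0
done
HUGENODE="nodes_hp[0]=${nodes_hp[0]}"            # what the test hands to setup.sh
echo "$HUGENODE"                                 # prints: nodes_hp[0]=512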
00:05:39.184 14:11:44 -- setup/hugepages.sh@93 -- # local resv 00:05:39.184 14:11:44 -- setup/hugepages.sh@94 -- # local anon 00:05:39.184 14:11:44 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:39.184 14:11:44 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:39.184 14:11:44 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:39.184 14:11:44 -- setup/common.sh@18 -- # local node= 00:05:39.184 14:11:44 -- setup/common.sh@19 -- # local var val 00:05:39.184 14:11:44 -- setup/common.sh@20 -- # local mem_f mem 00:05:39.184 14:11:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:39.184 14:11:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:39.184 14:11:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:39.184 14:11:44 -- setup/common.sh@28 -- # mapfile -t mem 00:05:39.184 14:11:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:39.184 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.184 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.184 14:11:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7583452 kB' 'MemAvailable: 10513200 kB' 'Buffers: 2684 kB' 'Cached: 3130452 kB' 'SwapCached: 0 kB' 'Active: 497748 kB' 'Inactive: 2753480 kB' 'Active(anon): 128580 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753480 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119708 kB' 'Mapped: 51124 kB' 'Shmem: 10488 kB' 'KReclaimable: 88096 kB' 'Slab: 191420 kB' 'SReclaimable: 88096 kB' 'SUnreclaim: 103324 kB' 'KernelStack: 6612 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 327980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:39.184 14:11:44 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.184 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.184 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.184 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.184 14:11:44 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.184 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.184 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.184 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.184 14:11:44 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.184 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.184 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.184 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.184 14:11:44 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.184 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.184 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.184 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.184 14:11:44 -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.184 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.184 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.184 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.184 14:11:44 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.184 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.184 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.184 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.184 14:11:44 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.184 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.184 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.184 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.184 14:11:44 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.184 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.184 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.184 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.184 14:11:44 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.184 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.184 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.184 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.184 14:11:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.184 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.184 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.184 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.184 14:11:44 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.184 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.184 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.184 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.184 14:11:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.184 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.184 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.184 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.184 14:11:44 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.184 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.184 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.184 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.184 14:11:44 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.184 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.184 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.184 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.184 14:11:44 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.184 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.184 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.184 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.184 14:11:44 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.184 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.184 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # read -r var val 
_ 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 
00:05:39.185 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.185 14:11:44 -- setup/common.sh@33 -- # echo 0 00:05:39.185 14:11:44 -- setup/common.sh@33 -- # return 0 00:05:39.185 14:11:44 -- setup/hugepages.sh@97 -- # anon=0 00:05:39.185 14:11:44 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:39.185 14:11:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:39.185 14:11:44 -- setup/common.sh@18 -- # local node= 00:05:39.185 14:11:44 -- setup/common.sh@19 -- # local var val 00:05:39.185 14:11:44 -- setup/common.sh@20 -- # local mem_f mem 00:05:39.185 14:11:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
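verify_nr_hugepages, traced above and below, reads AnonHugePages (0 kB here, so anon=0), then HugePages_Surp and HugePages_Rsvd, and checks that HugePages_Total matches the requested count plus surplus and reserved pages. An equivalent standalone check using awk instead of the traced read loop (illustrative only; 512 is the value this custom_alloc run expects):

anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)     # kB; 0 in this run
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)    # 0
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)    # 0
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)  # 512 for this test
nr_hugepages=512                                             # requested by custom_alloc
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage count verified"
fi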
00:05:39.185 14:11:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:39.185 14:11:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:39.185 14:11:44 -- setup/common.sh@28 -- # mapfile -t mem 00:05:39.185 14:11:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.185 14:11:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7583204 kB' 'MemAvailable: 10512952 kB' 'Buffers: 2684 kB' 'Cached: 3130452 kB' 'SwapCached: 0 kB' 'Active: 497408 kB' 'Inactive: 2753480 kB' 'Active(anon): 128240 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753480 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119328 kB' 'Mapped: 51016 kB' 'Shmem: 10488 kB' 'KReclaimable: 88096 kB' 'Slab: 191444 kB' 'SReclaimable: 88096 kB' 'SUnreclaim: 103348 kB' 'KernelStack: 6656 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 327980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55400 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.185 14:11:44 -- 
setup/common.sh@32 -- # continue 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.185 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.185 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.186 14:11:44 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 
00:05:39.186 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.186 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.186 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.187 14:11:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.187 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.187 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.187 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.187 14:11:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.187 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.187 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.187 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.187 14:11:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.187 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.187 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.187 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.187 14:11:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.187 14:11:44 -- setup/common.sh@33 -- # echo 0 00:05:39.187 14:11:44 -- setup/common.sh@33 -- # return 0 00:05:39.187 14:11:44 -- setup/hugepages.sh@99 -- # surp=0 00:05:39.187 14:11:44 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:39.187 14:11:44 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:39.187 14:11:44 -- setup/common.sh@18 -- # local node= 00:05:39.187 14:11:44 -- setup/common.sh@19 -- # local var val 00:05:39.187 14:11:44 -- setup/common.sh@20 -- # local mem_f mem 00:05:39.187 14:11:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:39.187 14:11:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:39.187 14:11:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:39.187 14:11:44 -- setup/common.sh@28 -- # mapfile -t mem 00:05:39.187 14:11:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:39.187 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.187 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.187 14:11:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7583204 kB' 'MemAvailable: 10512952 kB' 'Buffers: 2684 kB' 'Cached: 3130452 kB' 'SwapCached: 0 kB' 'Active: 497664 kB' 'Inactive: 2753480 kB' 'Active(anon): 128496 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753480 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119612 kB' 'Mapped: 
51016 kB' 'Shmem: 10488 kB' 'KReclaimable: 88096 kB' 'Slab: 191440 kB' 'SReclaimable: 88096 kB' 'SUnreclaim: 103344 kB' 'KernelStack: 6672 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 327980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55400 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:39.187 14:11:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.187 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.187 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.187 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.187 14:11:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.187 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.187 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.187 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.187 14:11:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.187 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.187 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.187 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.187 14:11:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.187 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.187 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.187 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.187 14:11:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.447 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.447 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.447 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.447 14:11:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.447 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.447 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.447 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.447 14:11:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.447 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.447 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.447 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.447 14:11:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.447 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.447 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.447 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.447 14:11:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.447 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.447 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.447 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.447 14:11:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.447 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.447 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.448 14:11:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # continue 
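The same key scan is then repeated against node0's meminfo for the per-node surplus and reserved counts. On typical Linux kernels the per-node hugepage totals are also exposed directly under sysfs, which gives a quick way to cross-check the 512-page pool shown in the meminfo dumps above; the path below is the standard kernel interface, and whether this particular script consults it is not visible in this trace:

# 2 MiB hugepages currently allocated on NUMA node 0 (expect 512 in this run)
cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages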
00:05:39.448 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 
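The long match-or-continue runs above and below are setup/common.sh's get_meminfo walking every /proc/meminfo key until it reaches the requested field (HugePages_Rsvd at this point in the run). A minimal standalone sketch of that pattern, assuming only a field name argument and /proc/meminfo as input; it is not the SPDK helper itself:

    get_meminfo_sketch() {
        local get=$1 var val _
        # Same loop shape as the trace: split each line on ': ', skip
        # every key that is not the one requested, print its value.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }
    # e.g. get_meminfo_sketch HugePages_Rsvd   -> 0 in this run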
00:05:39.448 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.448 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.448 14:11:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.449 14:11:44 -- setup/common.sh@33 -- # echo 0 00:05:39.449 14:11:44 -- setup/common.sh@33 -- # return 0 00:05:39.449 14:11:44 -- setup/hugepages.sh@100 -- # resv=0 00:05:39.449 nr_hugepages=512 00:05:39.449 14:11:44 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:39.449 resv_hugepages=0 00:05:39.449 14:11:44 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:39.449 surplus_hugepages=0 00:05:39.449 14:11:44 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:39.449 anon_hugepages=0 00:05:39.449 14:11:44 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:39.449 14:11:44 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:39.449 14:11:44 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:39.449 14:11:44 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:39.449 14:11:44 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:39.449 14:11:44 -- setup/common.sh@18 -- # local node= 00:05:39.449 14:11:44 -- setup/common.sh@19 -- # local var val 00:05:39.449 14:11:44 -- setup/common.sh@20 -- # local mem_f mem 00:05:39.449 14:11:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:39.449 14:11:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:39.449 14:11:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:39.449 14:11:44 -- setup/common.sh@28 -- # mapfile -t mem 00:05:39.449 14:11:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.449 14:11:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7583204 kB' 'MemAvailable: 10512952 kB' 'Buffers: 2684 kB' 'Cached: 3130452 kB' 'SwapCached: 0 kB' 'Active: 497404 kB' 'Inactive: 2753480 kB' 'Active(anon): 128236 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753480 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119360 kB' 'Mapped: 51016 kB' 'Shmem: 10488 kB' 'KReclaimable: 88096 kB' 'Slab: 191440 kB' 'SReclaimable: 88096 kB' 'SUnreclaim: 103344 kB' 'KernelStack: 6672 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 327980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.449 14:11:44 -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.449 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.449 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 
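By this point the script has already echoed nr_hugepages=512, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, and the scan here is fetching HugePages_Total to close the accounting check (( 512 == nr_hugepages + surp + resv )). Restated as a self-contained check, using awk purely as a stand-in for the script's own parser:

    nr_hugepages=512   # pages requested by the custom_alloc test
    total=$(awk '$1=="HugePages_Total:"{print $2}' /proc/meminfo)
    surp=$(awk '$1=="HugePages_Surp:"{print $2}' /proc/meminfo)
    resv=$(awk '$1=="HugePages_Rsvd:"{print $2}' /proc/meminfo)
    # The check passes only if the kernel's pool exactly covers the request.
    (( total == nr_hugepages + surp + resv )) && echo OK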
00:05:39.450 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.450 14:11:44 -- setup/common.sh@33 -- # echo 512 00:05:39.450 14:11:44 -- setup/common.sh@33 -- # return 0 00:05:39.450 14:11:44 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:39.450 14:11:44 -- setup/hugepages.sh@112 -- # get_nodes 00:05:39.450 14:11:44 -- setup/hugepages.sh@27 -- # local node 00:05:39.450 14:11:44 -- setup/hugepages.sh@29 -- # 
for node in /sys/devices/system/node/node+([0-9]) 00:05:39.450 14:11:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:39.450 14:11:44 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:39.450 14:11:44 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:39.450 14:11:44 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:39.450 14:11:44 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:39.450 14:11:44 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:39.450 14:11:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:39.450 14:11:44 -- setup/common.sh@18 -- # local node=0 00:05:39.450 14:11:44 -- setup/common.sh@19 -- # local var val 00:05:39.450 14:11:44 -- setup/common.sh@20 -- # local mem_f mem 00:05:39.450 14:11:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:39.450 14:11:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:39.450 14:11:44 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:39.450 14:11:44 -- setup/common.sh@28 -- # mapfile -t mem 00:05:39.450 14:11:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.450 14:11:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7583204 kB' 'MemUsed: 4655904 kB' 'SwapCached: 0 kB' 'Active: 497412 kB' 'Inactive: 2753480 kB' 'Active(anon): 128244 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753480 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3133136 kB' 'Mapped: 51016 kB' 'AnonPages: 119364 kB' 'Shmem: 10488 kB' 'KernelStack: 6672 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88096 kB' 'Slab: 191440 kB' 'SReclaimable: 88096 kB' 'SUnreclaim: 103344 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.450 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.450 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.451 
14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # continue 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.451 14:11:44 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.451 14:11:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.451 14:11:44 -- setup/common.sh@33 -- # echo 0 00:05:39.451 14:11:44 -- setup/common.sh@33 -- # return 0 00:05:39.451 14:11:44 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:39.451 14:11:44 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:39.451 node0=512 expecting 512 00:05:39.451 ************************************ 00:05:39.451 END TEST custom_alloc 00:05:39.451 ************************************ 00:05:39.451 14:11:44 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:39.451 14:11:44 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:39.451 14:11:44 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:39.451 14:11:44 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:39.451 00:05:39.451 real 0m0.620s 00:05:39.451 user 0m0.308s 00:05:39.451 sys 0m0.335s 00:05:39.451 14:11:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:39.451 14:11:44 -- common/autotest_common.sh@10 -- # set +x 00:05:39.451 14:11:44 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:39.451 14:11:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:39.451 14:11:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.451 14:11:44 -- common/autotest_common.sh@10 -- # set +x 00:05:39.451 ************************************ 00:05:39.451 START TEST no_shrink_alloc 00:05:39.451 ************************************ 00:05:39.451 14:11:44 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:05:39.451 14:11:44 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:39.451 14:11:44 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:39.451 14:11:44 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:39.451 14:11:44 -- 
setup/hugepages.sh@51 -- # shift 00:05:39.451 14:11:44 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:39.451 14:11:44 -- setup/hugepages.sh@52 -- # local node_ids 00:05:39.451 14:11:44 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:39.451 14:11:44 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:39.451 14:11:44 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:39.451 14:11:44 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:39.451 14:11:44 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:39.451 14:11:44 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:39.451 14:11:44 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:39.451 14:11:44 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:39.451 14:11:44 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:39.451 14:11:44 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:39.451 14:11:44 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:39.452 14:11:44 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:39.452 14:11:44 -- setup/hugepages.sh@73 -- # return 0 00:05:39.452 14:11:44 -- setup/hugepages.sh@198 -- # setup output 00:05:39.452 14:11:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:39.452 14:11:44 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:40.020 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:40.020 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:40.020 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:40.020 14:11:45 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:40.020 14:11:45 -- setup/hugepages.sh@89 -- # local node 00:05:40.020 14:11:45 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:40.020 14:11:45 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:40.020 14:11:45 -- setup/hugepages.sh@92 -- # local surp 00:05:40.020 14:11:45 -- setup/hugepages.sh@93 -- # local resv 00:05:40.020 14:11:45 -- setup/hugepages.sh@94 -- # local anon 00:05:40.020 14:11:45 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:40.020 14:11:45 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:40.020 14:11:45 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:40.020 14:11:45 -- setup/common.sh@18 -- # local node= 00:05:40.020 14:11:45 -- setup/common.sh@19 -- # local var val 00:05:40.020 14:11:45 -- setup/common.sh@20 -- # local mem_f mem 00:05:40.020 14:11:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.020 14:11:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:40.020 14:11:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:40.020 14:11:45 -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.020 14:11:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.020 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.020 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.021 14:11:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6532184 kB' 'MemAvailable: 9461936 kB' 'Buffers: 2684 kB' 'Cached: 3130456 kB' 'SwapCached: 0 kB' 'Active: 497596 kB' 'Inactive: 2753484 kB' 'Active(anon): 128428 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753484 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119512 kB' 
'Mapped: 51148 kB' 'Shmem: 10488 kB' 'KReclaimable: 88096 kB' 'Slab: 191452 kB' 'SReclaimable: 88096 kB' 'SUnreclaim: 103356 kB' 'KernelStack: 6664 kB' 'PageTables: 4500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 328180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.021 14:11:45 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.021 14:11:45 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
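For the no_shrink_alloc run started above, get_test_nr_hugepages 2097152 0 resolves to nr_hugepages=1024: 2097152 kB divided by the 2048 kB Hugepagesize gives 1024 pages, consistent with the HugePages_Total: 1024 and Hugetlb: 2097152 kB values in the meminfo dumps. A rough sketch of that arithmetic (names illustrative, assuming the size argument is in kB):

    size_kb=2097152                                              # requested pool size
    hpg_kb=$(awk '$1=="Hugepagesize:"{print $2}' /proc/meminfo)  # 2048 on this VM
    echo $(( size_kb / hpg_kb ))                                 # -> 1024 hugepages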
00:05:40.021 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.021 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.021 14:11:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.022 14:11:45 -- setup/common.sh@33 -- # echo 0 00:05:40.022 14:11:45 -- setup/common.sh@33 -- # return 0 00:05:40.022 14:11:45 -- setup/hugepages.sh@97 -- # anon=0 00:05:40.022 14:11:45 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:40.022 14:11:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:40.022 14:11:45 -- setup/common.sh@18 -- # local node= 00:05:40.022 14:11:45 -- setup/common.sh@19 -- # local var val 00:05:40.022 14:11:45 -- setup/common.sh@20 -- # local mem_f mem 00:05:40.022 14:11:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.022 14:11:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:40.022 14:11:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:40.022 14:11:45 -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.022 14:11:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.022 14:11:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6532184 kB' 'MemAvailable: 9461936 kB' 'Buffers: 2684 kB' 'Cached: 3130456 kB' 'SwapCached: 0 kB' 'Active: 497580 kB' 'Inactive: 2753484 kB' 'Active(anon): 128412 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753484 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119524 kB' 'Mapped: 51016 kB' 'Shmem: 10488 kB' 'KReclaimable: 88096 kB' 'Slab: 191452 kB' 'SReclaimable: 88096 kB' 'SUnreclaim: 103356 kB' 'KernelStack: 6688 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 328180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55384 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.022 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.022 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 
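The long run of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" entries above is setup/common.sh's get_meminfo helper scanning a meminfo snapshot one key at a time until it reaches the field it was asked for, then echoing that field's value (the escaped pattern is just how bash xtrace prints the literal comparison string; the trace also shows the real helper slurping the file with mapfile and stripping a "Node N " prefix). Below is a minimal sketch of that lookup pattern, assuming a plain /proc/meminfo layout; the function name get_meminfo_sketch is hypothetical and this is not the repo's actual implementation.

  #!/usr/bin/env bash
  # Sketch (assumption, not SPDK's code): look up one key in /proc/meminfo,
  # or in a per-NUMA-node meminfo file when a node number is given.
  get_meminfo_sketch() {
      local want=$1 node=${2:-}        # e.g. HugePages_Surp, optional NUMA node number
      local file=/proc/meminfo
      [[ -n $node ]] && file=/sys/devices/system/node/node$node/meminfo
      local line var val _
      while IFS= read -r line; do
          # per-node meminfo lines carry a "Node N " prefix; drop it first
          [[ $line =~ ^Node\ [0-9]+\ (.*)$ ]] && line=${BASH_REMATCH[1]}
          # split "Key:   value [kB]" on ':' and whitespace, like the traced read -r var val _
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$want" ]]; then
              echo "$val"              # kB figure, or a bare page count for HugePages_* keys
              return 0
          fi
      done < "$file"
      return 1
  }
  # Usage matching the values echoed in this run:
  #   get_meminfo_sketch HugePages_Total     -> 1024
  #   get_meminfo_sketch HugePages_Surp 0    -> 0   (read from node0's meminfo)

Later in this same pass the helper is called again for HugePages_Surp, HugePages_Rsvd and HugePages_Total, and hugepages.sh checks that HugePages_Total (1024 here) equals nr_hugepages plus the surplus and reserved counts, then repeats the lookup against /sys/devices/system/node/node0/meminfo before printing "node0=1024 expecting 1024".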
00:05:40.023 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.023 14:11:45 -- setup/common.sh@33 -- # echo 0 00:05:40.023 14:11:45 -- setup/common.sh@33 -- # return 0 00:05:40.023 14:11:45 -- setup/hugepages.sh@99 -- # surp=0 00:05:40.023 14:11:45 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:40.023 14:11:45 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:40.023 14:11:45 -- setup/common.sh@18 -- # local node= 00:05:40.023 14:11:45 -- setup/common.sh@19 -- # local var val 00:05:40.023 14:11:45 -- setup/common.sh@20 -- # local mem_f mem 00:05:40.023 14:11:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.023 14:11:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:40.023 14:11:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:40.023 14:11:45 -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.023 14:11:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.023 14:11:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6532904 kB' 'MemAvailable: 9462656 kB' 'Buffers: 2684 kB' 'Cached: 3130456 kB' 'SwapCached: 0 kB' 'Active: 497504 kB' 'Inactive: 2753484 kB' 'Active(anon): 128336 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753484 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119420 kB' 'Mapped: 51016 kB' 'Shmem: 10488 kB' 'KReclaimable: 88096 kB' 'Slab: 191452 kB' 'SReclaimable: 88096 kB' 'SUnreclaim: 103356 kB' 'KernelStack: 6672 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 328180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55384 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.023 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.023 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # IFS=': 
' 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.024 14:11:45 
-- setup/common.sh@32 -- # continue 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.024 14:11:45 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.024 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.024 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.025 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.025 14:11:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.025 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.025 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.025 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.025 14:11:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.025 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.025 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.025 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.025 14:11:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.025 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.025 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.025 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.025 14:11:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.025 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.025 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.025 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.025 14:11:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.025 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.025 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.025 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.025 14:11:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.025 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.025 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.025 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.025 14:11:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.025 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.025 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.025 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.025 14:11:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.025 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.025 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.025 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.025 14:11:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.025 14:11:45 -- setup/common.sh@33 -- # echo 0 00:05:40.025 14:11:45 -- setup/common.sh@33 -- # return 0 00:05:40.025 nr_hugepages=1024 00:05:40.025 14:11:45 -- setup/hugepages.sh@100 -- # resv=0 00:05:40.025 14:11:45 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:40.025 resv_hugepages=0 00:05:40.025 14:11:45 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:40.025 surplus_hugepages=0 00:05:40.025 anon_hugepages=0 00:05:40.025 14:11:45 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:40.025 14:11:45 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:40.025 14:11:45 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:40.025 14:11:45 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:40.025 14:11:45 -- setup/hugepages.sh@110 -- # 
get_meminfo HugePages_Total 00:05:40.025 14:11:45 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:40.025 14:11:45 -- setup/common.sh@18 -- # local node= 00:05:40.025 14:11:45 -- setup/common.sh@19 -- # local var val 00:05:40.025 14:11:45 -- setup/common.sh@20 -- # local mem_f mem 00:05:40.025 14:11:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.025 14:11:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:40.025 14:11:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:40.025 14:11:45 -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.025 14:11:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.025 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.025 14:11:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6532904 kB' 'MemAvailable: 9462656 kB' 'Buffers: 2684 kB' 'Cached: 3130456 kB' 'SwapCached: 0 kB' 'Active: 497480 kB' 'Inactive: 2753484 kB' 'Active(anon): 128312 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753484 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119396 kB' 'Mapped: 51016 kB' 'Shmem: 10488 kB' 'KReclaimable: 88096 kB' 'Slab: 191452 kB' 'SReclaimable: 88096 kB' 'SUnreclaim: 103356 kB' 'KernelStack: 6672 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 328180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55384 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:40.025 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.025 14:11:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.025 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.025 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.025 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.025 14:11:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.025 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.025 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.025 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.025 14:11:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.025 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.025 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.025 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.025 14:11:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.025 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.025 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.025 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.025 14:11:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.025 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.025 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.025 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.025 14:11:45 -- setup/common.sh@32 
-- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.025 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.025 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.025 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.025 14:11:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.025 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.025 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.025 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.025 14:11:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.025 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.025 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.025 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.026 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.026 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.027 14:11:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.027 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.027 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.027 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.027 14:11:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.027 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.027 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.027 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.027 14:11:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.027 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.027 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.027 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.027 14:11:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.027 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.027 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.027 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.027 14:11:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.027 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.027 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.027 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.027 14:11:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.027 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.027 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.027 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.027 14:11:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.027 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.027 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.027 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.027 14:11:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.027 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.027 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.027 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.027 14:11:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.027 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.027 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.027 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.027 14:11:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.027 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.027 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.027 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.027 14:11:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.027 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.027 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.027 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.027 14:11:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.027 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.027 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.027 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.027 14:11:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.027 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.027 14:11:45 -- setup/common.sh@31 -- 
# IFS=': ' 00:05:40.027 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.027 14:11:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.027 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.027 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.027 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.027 14:11:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.027 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.027 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.027 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.027 14:11:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.027 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.027 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.027 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.027 14:11:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.027 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.027 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.027 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.027 14:11:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.027 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.027 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.027 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.027 14:11:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.027 14:11:45 -- setup/common.sh@33 -- # echo 1024 00:05:40.027 14:11:45 -- setup/common.sh@33 -- # return 0 00:05:40.027 14:11:45 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:40.027 14:11:45 -- setup/hugepages.sh@112 -- # get_nodes 00:05:40.027 14:11:45 -- setup/hugepages.sh@27 -- # local node 00:05:40.027 14:11:45 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:40.027 14:11:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:40.027 14:11:45 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:40.027 14:11:45 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:40.027 14:11:45 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:40.027 14:11:45 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:40.027 14:11:45 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:40.027 14:11:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:40.027 14:11:45 -- setup/common.sh@18 -- # local node=0 00:05:40.027 14:11:45 -- setup/common.sh@19 -- # local var val 00:05:40.027 14:11:45 -- setup/common.sh@20 -- # local mem_f mem 00:05:40.027 14:11:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.027 14:11:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:40.027 14:11:45 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:40.027 14:11:45 -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.027 14:11:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.027 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.027 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.028 14:11:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6532904 kB' 'MemUsed: 5706204 kB' 'SwapCached: 0 kB' 'Active: 497484 kB' 'Inactive: 2753484 kB' 'Active(anon): 128316 kB' 'Inactive(anon): 0 kB' 
'Active(file): 369168 kB' 'Inactive(file): 2753484 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3133140 kB' 'Mapped: 51016 kB' 'AnonPages: 119396 kB' 'Shmem: 10488 kB' 'KernelStack: 6672 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88084 kB' 'Slab: 191440 kB' 'SReclaimable: 88084 kB' 'SUnreclaim: 103356 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.028 14:11:45 
-- setup/common.sh@32 -- # continue 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.028 14:11:45 -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.028 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.028 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.029 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.029 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.029 14:11:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.029 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.029 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.029 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.029 14:11:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.029 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.029 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.029 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.029 14:11:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.029 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.029 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.029 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.029 14:11:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.029 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.029 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.029 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.029 14:11:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.029 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.029 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.029 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.029 14:11:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.029 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.029 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.029 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.029 14:11:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.029 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.029 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.029 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.029 14:11:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.029 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.029 14:11:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.029 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.029 14:11:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.029 14:11:45 -- setup/common.sh@32 -- # continue 00:05:40.029 14:11:45 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:40.029 14:11:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.029 14:11:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.029 14:11:45 -- setup/common.sh@33 -- # echo 0 00:05:40.029 14:11:45 -- setup/common.sh@33 -- # return 0 00:05:40.029 14:11:45 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:40.029 14:11:45 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:40.029 node0=1024 expecting 1024 00:05:40.029 14:11:45 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:40.029 14:11:45 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:40.029 14:11:45 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:40.029 14:11:45 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:40.029 14:11:45 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:40.029 14:11:45 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:40.029 14:11:45 -- setup/hugepages.sh@202 -- # setup output 00:05:40.029 14:11:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:40.029 14:11:45 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:40.599 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:40.599 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:40.599 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:40.599 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:40.599 14:11:46 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:40.599 14:11:46 -- setup/hugepages.sh@89 -- # local node 00:05:40.599 14:11:46 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:40.599 14:11:46 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:40.599 14:11:46 -- setup/hugepages.sh@92 -- # local surp 00:05:40.599 14:11:46 -- setup/hugepages.sh@93 -- # local resv 00:05:40.599 14:11:46 -- setup/hugepages.sh@94 -- # local anon 00:05:40.599 14:11:46 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:40.599 14:11:46 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:40.599 14:11:46 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:40.599 14:11:46 -- setup/common.sh@18 -- # local node= 00:05:40.599 14:11:46 -- setup/common.sh@19 -- # local var val 00:05:40.599 14:11:46 -- setup/common.sh@20 -- # local mem_f mem 00:05:40.599 14:11:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.599 14:11:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:40.599 14:11:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:40.599 14:11:46 -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.599 14:11:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 14:11:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6534740 kB' 'MemAvailable: 9464476 kB' 'Buffers: 2684 kB' 'Cached: 3130456 kB' 'SwapCached: 0 kB' 'Active: 495624 kB' 'Inactive: 2753484 kB' 'Active(anon): 126456 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753484 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117352 kB' 'Mapped: 50260 kB' 'Shmem: 10488 kB' 'KReclaimable: 88060 kB' 'Slab: 
191284 kB' 'SReclaimable: 88060 kB' 'SUnreclaim: 103224 kB' 'KernelStack: 6616 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 313228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55320 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 14:11:46 -- setup/common.sh@32 
-- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.599 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.599 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.600 14:11:46 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.600 14:11:46 -- setup/common.sh@33 -- # echo 0 00:05:40.600 14:11:46 -- setup/common.sh@33 -- # return 0 00:05:40.600 14:11:46 -- setup/hugepages.sh@97 -- # anon=0 00:05:40.600 14:11:46 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:40.600 14:11:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:40.600 14:11:46 -- setup/common.sh@18 -- # local node= 00:05:40.600 14:11:46 -- setup/common.sh@19 -- # local var val 00:05:40.600 14:11:46 -- setup/common.sh@20 -- # local mem_f mem 00:05:40.600 14:11:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.600 14:11:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:40.600 14:11:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:40.600 14:11:46 -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.600 14:11:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 14:11:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6534740 kB' 'MemAvailable: 9464476 kB' 'Buffers: 2684 kB' 'Cached: 3130456 kB' 'SwapCached: 0 kB' 'Active: 495020 kB' 'Inactive: 2753484 kB' 'Active(anon): 125852 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753484 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116936 kB' 'Mapped: 50168 kB' 'Shmem: 10488 kB' 'KReclaimable: 88060 kB' 'Slab: 191280 kB' 'SReclaimable: 88060 kB' 'SUnreclaim: 103220 kB' 'KernelStack: 6576 kB' 'PageTables: 3916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 313228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55288 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 
kB' 'DirectMap1G: 9437184 kB' 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.600 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 
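[Editor's note] The xtrace above shows setup/common.sh walking every field of /proc/meminfo and skipping (`continue`) each line whose key does not match the one requested — first AnonHugePages, now HugePages_Surp. As a minimal, hedged sketch of that lookup pattern (the real get_meminfo in setup/common.sh reads the file into an array via mapfile first; the stand-in name get_meminfo_value and this simplified loop are illustrative only):

#!/usr/bin/env bash
# Simplified stand-in for the lookup traced above: scan a meminfo file
# line by line and print the value of a single field.
get_meminfo_value() {
    local want=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # Skip every field that is not the requested one -- this is what
        # the long run of "continue" lines in the trace corresponds to.
        [[ $var == "$want" ]] || continue
        echo "$val"
        return 0
    done < "$file"
    return 1
}

get_meminfo_value AnonHugePages    # prints 0 on this test VM, per the log
get_meminfo_value HugePages_Surp   # prints 0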
00:05:40.601 14:11:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.601 
14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 14:11:46 -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 14:11:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.601 14:11:46 -- setup/common.sh@33 -- # echo 0 00:05:40.601 14:11:46 -- setup/common.sh@33 -- # return 0 00:05:40.601 14:11:46 -- setup/hugepages.sh@99 -- # surp=0 00:05:40.601 14:11:46 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:40.601 14:11:46 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:40.601 14:11:46 -- setup/common.sh@18 -- # local node= 00:05:40.601 14:11:46 -- setup/common.sh@19 -- # local var val 00:05:40.601 14:11:46 -- setup/common.sh@20 -- # local mem_f mem 00:05:40.601 14:11:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.601 14:11:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:40.601 14:11:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:40.601 14:11:46 -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.601 14:11:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 14:11:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6534740 kB' 'MemAvailable: 9464476 kB' 'Buffers: 2684 kB' 'Cached: 3130456 kB' 'SwapCached: 0 kB' 'Active: 495152 kB' 'Inactive: 2753484 kB' 'Active(anon): 125984 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753484 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117036 kB' 'Mapped: 50168 kB' 'Shmem: 10488 kB' 'KReclaimable: 88060 kB' 'Slab: 191280 kB' 'SReclaimable: 88060 kB' 'SUnreclaim: 103220 kB' 'KernelStack: 6592 kB' 'PageTables: 3972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 315580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55272 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 14:11:46 -- setup/common.sh@32 
-- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 
14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # 
continue 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.602 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.602 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 14:11:46 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.603 14:11:46 -- setup/common.sh@33 -- # echo 0 00:05:40.603 14:11:46 -- setup/common.sh@33 -- # return 0 00:05:40.603 14:11:46 -- setup/hugepages.sh@100 -- # resv=0 00:05:40.603 nr_hugepages=1024 00:05:40.603 14:11:46 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:40.603 resv_hugepages=0 00:05:40.603 14:11:46 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:40.603 surplus_hugepages=0 00:05:40.603 14:11:46 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:40.603 anon_hugepages=0 00:05:40.603 14:11:46 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:40.603 14:11:46 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:40.603 14:11:46 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:40.603 14:11:46 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:40.603 14:11:46 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:05:40.603 14:11:46 -- setup/common.sh@18 -- # local node= 00:05:40.603 14:11:46 -- setup/common.sh@19 -- # local var val 00:05:40.603 14:11:46 -- setup/common.sh@20 -- # local mem_f mem 00:05:40.603 14:11:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.603 14:11:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:40.603 14:11:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:40.603 14:11:46 -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.603 14:11:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 14:11:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6534740 kB' 'MemAvailable: 9464476 kB' 'Buffers: 2684 kB' 'Cached: 3130456 kB' 'SwapCached: 0 kB' 'Active: 494800 kB' 'Inactive: 2753484 kB' 'Active(anon): 125632 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753484 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116696 kB' 'Mapped: 50264 kB' 'Shmem: 10488 kB' 'KReclaimable: 88060 kB' 'Slab: 191280 kB' 'SReclaimable: 88060 kB' 'SUnreclaim: 103220 kB' 'KernelStack: 6544 kB' 'PageTables: 3816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 313228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55272 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 14:11:46 -- 
setup/common.sh@32 -- # continue 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.603 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 
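[Editor's note] By this point the trace has set anon=0, surp=0, resv=0 and nr_hugepages=1024, and is reading HugePages_Total so the totals can be compared (the `(( 1024 == nr_hugepages + surp + resv ))` checks at hugepages.sh@107/@109 above). A hedged, self-contained sketch of that arithmetic using the figures visible in the log:

#!/usr/bin/env bash
# Figures taken from the trace (single-node test VM, 2048 kB hugepages).
nr_hugepages=1024   # setup.sh reported 1024 pages already allocated (512 were requested)
surp=0              # HugePages_Surp
resv=0              # HugePages_Rsvd
anon=0              # AnonHugePages

# Read the system-wide total the same way the traced call is doing here.
total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)

# Sketch of the consistency check visible at hugepages.sh@107/@110:
if (( total == nr_hugepages + surp + resv )); then
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"
else
    echo "unexpected hugepage count: total=$total" >&2
fi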
00:05:40.604 14:11:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.604 
14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 14:11:46 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 14:11:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.604 14:11:46 -- setup/common.sh@33 -- # echo 1024 00:05:40.604 14:11:46 -- setup/common.sh@33 -- # return 0 00:05:40.604 14:11:46 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:40.604 14:11:46 -- setup/hugepages.sh@112 -- # get_nodes 00:05:40.604 14:11:46 -- setup/hugepages.sh@27 -- # local node 00:05:40.604 14:11:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:40.604 14:11:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:40.604 14:11:46 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:40.604 14:11:46 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:40.604 14:11:46 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:40.604 14:11:46 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:40.604 14:11:46 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:40.604 14:11:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:40.604 14:11:46 -- setup/common.sh@18 -- # local node=0 00:05:40.604 14:11:46 -- setup/common.sh@19 -- # local var val 00:05:40.604 14:11:46 -- setup/common.sh@20 -- # local mem_f mem 00:05:40.604 14:11:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.604 14:11:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:40.604 14:11:46 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:40.604 14:11:46 -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.604 14:11:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 14:11:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6534740 kB' 'MemUsed: 5704368 kB' 'SwapCached: 0 kB' 'Active: 494968 kB' 'Inactive: 2753484 kB' 'Active(anon): 125800 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2753484 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 
'Writeback: 0 kB' 'FilePages: 3133140 kB' 'Mapped: 50168 kB' 'AnonPages: 116924 kB' 'Shmem: 10488 kB' 'KernelStack: 6576 kB' 'PageTables: 3920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88060 kB' 'Slab: 191280 kB' 'SReclaimable: 88060 kB' 'SUnreclaim: 103220 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 
14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 14:11:46 -- setup/common.sh@32 -- 
# continue 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 14:11:46 -- setup/common.sh@32 -- # continue 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 14:11:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 14:11:46 
-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 14:11:46 -- setup/common.sh@33 -- # echo 0 00:05:40.605 14:11:46 -- setup/common.sh@33 -- # return 0 00:05:40.605 14:11:46 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:40.605 14:11:46 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:40.605 14:11:46 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:40.605 14:11:46 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:40.605 node0=1024 expecting 1024 00:05:40.605 14:11:46 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:40.606 14:11:46 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:40.606 00:05:40.606 real 0m1.237s 00:05:40.606 user 0m0.567s 00:05:40.606 sys 0m0.690s 00:05:40.606 14:11:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:40.606 14:11:46 -- common/autotest_common.sh@10 -- # set +x 00:05:40.606 ************************************ 00:05:40.606 END TEST no_shrink_alloc 00:05:40.606 ************************************ 00:05:40.863 14:11:46 -- setup/hugepages.sh@217 -- # clear_hp 00:05:40.863 14:11:46 -- setup/hugepages.sh@37 -- # local node hp 00:05:40.863 14:11:46 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:40.863 14:11:46 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:40.863 14:11:46 -- setup/hugepages.sh@41 -- # echo 0 00:05:40.863 14:11:46 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:40.863 14:11:46 -- setup/hugepages.sh@41 -- # echo 0 00:05:40.863 14:11:46 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:40.863 14:11:46 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:40.863 ************************************ 00:05:40.863 END TEST hugepages 00:05:40.863 ************************************ 00:05:40.863 00:05:40.863 real 0m5.437s 00:05:40.863 user 0m2.549s 00:05:40.863 sys 0m2.892s 00:05:40.863 14:11:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:40.864 14:11:46 -- common/autotest_common.sh@10 -- # set +x 00:05:40.864 14:11:46 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:40.864 14:11:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:40.864 14:11:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.864 14:11:46 -- common/autotest_common.sh@10 -- # set +x 00:05:40.864 ************************************ 00:05:40.864 START TEST driver 00:05:40.864 ************************************ 00:05:40.864 14:11:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:40.864 * Looking for test storage... 
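Editor's note: the HugePages accounting traced above (get_meminfo in setup/common.sh) amounts to picking /proc/meminfo, or the per-node file /sys/devices/system/node/node$N/meminfo when a node number is given, stripping the leading "Node N " prefix, and printing the value of the requested key. A minimal stand-alone sketch of that idea, assuming the standard procfs/sysfs layout; the helper name and flow are illustrative, not the exact setup/common.sh implementation:

    #!/usr/bin/env bash
    # Print the value of a /proc/meminfo-style key, optionally for one NUMA node.
    get_meminfo_sketch() {
        local key=$1 node=${2:-} mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node lines look like "Node 0 HugePages_Total:    1024",
        # so drop the "Node N " prefix before matching on the key.
        sed -E 's/^Node [0-9]+ //' "$mem_f" | awk -v k="$key:" '$1 == k { print $2 }'
    }

    # Against the node0 state dumped above, this would print 1024:
    #   get_meminfo_sketch HugePages_Free 0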
00:05:40.864 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:40.864 14:11:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:40.864 14:11:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:40.864 14:11:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:41.121 14:11:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:41.121 14:11:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:41.121 14:11:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:41.121 14:11:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:41.121 14:11:46 -- scripts/common.sh@335 -- # IFS=.-: 00:05:41.121 14:11:46 -- scripts/common.sh@335 -- # read -ra ver1 00:05:41.121 14:11:46 -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.121 14:11:46 -- scripts/common.sh@336 -- # read -ra ver2 00:05:41.121 14:11:46 -- scripts/common.sh@337 -- # local 'op=<' 00:05:41.121 14:11:46 -- scripts/common.sh@339 -- # ver1_l=2 00:05:41.121 14:11:46 -- scripts/common.sh@340 -- # ver2_l=1 00:05:41.121 14:11:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:41.121 14:11:46 -- scripts/common.sh@343 -- # case "$op" in 00:05:41.121 14:11:46 -- scripts/common.sh@344 -- # : 1 00:05:41.121 14:11:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:41.121 14:11:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.121 14:11:46 -- scripts/common.sh@364 -- # decimal 1 00:05:41.121 14:11:46 -- scripts/common.sh@352 -- # local d=1 00:05:41.121 14:11:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.121 14:11:46 -- scripts/common.sh@354 -- # echo 1 00:05:41.121 14:11:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:41.121 14:11:46 -- scripts/common.sh@365 -- # decimal 2 00:05:41.121 14:11:46 -- scripts/common.sh@352 -- # local d=2 00:05:41.121 14:11:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.121 14:11:46 -- scripts/common.sh@354 -- # echo 2 00:05:41.121 14:11:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:41.121 14:11:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:41.121 14:11:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:41.121 14:11:46 -- scripts/common.sh@367 -- # return 0 00:05:41.121 14:11:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.121 14:11:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:41.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.121 --rc genhtml_branch_coverage=1 00:05:41.121 --rc genhtml_function_coverage=1 00:05:41.121 --rc genhtml_legend=1 00:05:41.121 --rc geninfo_all_blocks=1 00:05:41.121 --rc geninfo_unexecuted_blocks=1 00:05:41.121 00:05:41.121 ' 00:05:41.121 14:11:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:41.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.121 --rc genhtml_branch_coverage=1 00:05:41.121 --rc genhtml_function_coverage=1 00:05:41.121 --rc genhtml_legend=1 00:05:41.121 --rc geninfo_all_blocks=1 00:05:41.121 --rc geninfo_unexecuted_blocks=1 00:05:41.121 00:05:41.121 ' 00:05:41.121 14:11:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:41.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.121 --rc genhtml_branch_coverage=1 00:05:41.121 --rc genhtml_function_coverage=1 00:05:41.121 --rc genhtml_legend=1 00:05:41.121 --rc geninfo_all_blocks=1 00:05:41.121 --rc geninfo_unexecuted_blocks=1 00:05:41.121 00:05:41.121 ' 00:05:41.121 14:11:46 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:41.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.121 --rc genhtml_branch_coverage=1 00:05:41.121 --rc genhtml_function_coverage=1 00:05:41.121 --rc genhtml_legend=1 00:05:41.121 --rc geninfo_all_blocks=1 00:05:41.121 --rc geninfo_unexecuted_blocks=1 00:05:41.121 00:05:41.121 ' 00:05:41.121 14:11:46 -- setup/driver.sh@68 -- # setup reset 00:05:41.121 14:11:46 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:41.121 14:11:46 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:41.686 14:11:47 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:41.686 14:11:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:41.686 14:11:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.686 14:11:47 -- common/autotest_common.sh@10 -- # set +x 00:05:41.686 ************************************ 00:05:41.686 START TEST guess_driver 00:05:41.687 ************************************ 00:05:41.687 14:11:47 -- common/autotest_common.sh@1114 -- # guess_driver 00:05:41.687 14:11:47 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:41.687 14:11:47 -- setup/driver.sh@47 -- # local fail=0 00:05:41.687 14:11:47 -- setup/driver.sh@49 -- # pick_driver 00:05:41.687 14:11:47 -- setup/driver.sh@36 -- # vfio 00:05:41.687 14:11:47 -- setup/driver.sh@21 -- # local iommu_grups 00:05:41.687 14:11:47 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:41.687 14:11:47 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:41.687 14:11:47 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:41.687 14:11:47 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:41.687 14:11:47 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:41.687 14:11:47 -- setup/driver.sh@32 -- # return 1 00:05:41.687 14:11:47 -- setup/driver.sh@38 -- # uio 00:05:41.687 14:11:47 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:41.687 14:11:47 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:41.687 14:11:47 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:41.687 14:11:47 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:41.687 14:11:47 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:41.687 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:41.687 14:11:47 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:41.687 14:11:47 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:41.687 14:11:47 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:41.687 Looking for driver=uio_pci_generic 00:05:41.687 14:11:47 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:41.687 14:11:47 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:41.687 14:11:47 -- setup/driver.sh@45 -- # setup output config 00:05:41.687 14:11:47 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:41.687 14:11:47 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:42.253 14:11:47 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:42.253 14:11:47 -- setup/driver.sh@58 -- # continue 00:05:42.253 14:11:47 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:42.510 14:11:47 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:42.510 14:11:47 -- setup/driver.sh@61 -- # [[ uio_pci_generic == 
uio_pci_generic ]] 00:05:42.510 14:11:47 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:42.510 14:11:47 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:42.510 14:11:47 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:42.510 14:11:47 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:42.510 14:11:48 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:42.510 14:11:48 -- setup/driver.sh@65 -- # setup reset 00:05:42.510 14:11:48 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:42.510 14:11:48 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:43.075 00:05:43.075 real 0m1.509s 00:05:43.075 user 0m0.590s 00:05:43.075 sys 0m0.934s 00:05:43.075 14:11:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:43.075 ************************************ 00:05:43.075 END TEST guess_driver 00:05:43.075 ************************************ 00:05:43.075 14:11:48 -- common/autotest_common.sh@10 -- # set +x 00:05:43.075 00:05:43.075 real 0m2.354s 00:05:43.075 user 0m0.919s 00:05:43.075 sys 0m1.518s 00:05:43.075 14:11:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:43.075 14:11:48 -- common/autotest_common.sh@10 -- # set +x 00:05:43.075 ************************************ 00:05:43.075 END TEST driver 00:05:43.075 ************************************ 00:05:43.333 14:11:48 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:43.333 14:11:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:43.333 14:11:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.334 14:11:48 -- common/autotest_common.sh@10 -- # set +x 00:05:43.334 ************************************ 00:05:43.334 START TEST devices 00:05:43.334 ************************************ 00:05:43.334 14:11:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:43.334 * Looking for test storage... 00:05:43.334 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:43.334 14:11:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:43.334 14:11:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:43.334 14:11:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:43.334 14:11:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:43.334 14:11:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:43.334 14:11:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:43.334 14:11:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:43.334 14:11:48 -- scripts/common.sh@335 -- # IFS=.-: 00:05:43.334 14:11:48 -- scripts/common.sh@335 -- # read -ra ver1 00:05:43.334 14:11:48 -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.334 14:11:48 -- scripts/common.sh@336 -- # read -ra ver2 00:05:43.334 14:11:48 -- scripts/common.sh@337 -- # local 'op=<' 00:05:43.334 14:11:48 -- scripts/common.sh@339 -- # ver1_l=2 00:05:43.334 14:11:48 -- scripts/common.sh@340 -- # ver2_l=1 00:05:43.334 14:11:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:43.334 14:11:48 -- scripts/common.sh@343 -- # case "$op" in 00:05:43.334 14:11:48 -- scripts/common.sh@344 -- # : 1 00:05:43.334 14:11:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:43.334 14:11:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:43.334 14:11:48 -- scripts/common.sh@364 -- # decimal 1 00:05:43.334 14:11:48 -- scripts/common.sh@352 -- # local d=1 00:05:43.334 14:11:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.334 14:11:48 -- scripts/common.sh@354 -- # echo 1 00:05:43.334 14:11:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:43.334 14:11:48 -- scripts/common.sh@365 -- # decimal 2 00:05:43.334 14:11:48 -- scripts/common.sh@352 -- # local d=2 00:05:43.334 14:11:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.334 14:11:48 -- scripts/common.sh@354 -- # echo 2 00:05:43.334 14:11:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:43.334 14:11:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:43.334 14:11:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:43.334 14:11:48 -- scripts/common.sh@367 -- # return 0 00:05:43.334 14:11:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.334 14:11:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:43.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.334 --rc genhtml_branch_coverage=1 00:05:43.334 --rc genhtml_function_coverage=1 00:05:43.334 --rc genhtml_legend=1 00:05:43.334 --rc geninfo_all_blocks=1 00:05:43.334 --rc geninfo_unexecuted_blocks=1 00:05:43.334 00:05:43.334 ' 00:05:43.334 14:11:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:43.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.334 --rc genhtml_branch_coverage=1 00:05:43.334 --rc genhtml_function_coverage=1 00:05:43.334 --rc genhtml_legend=1 00:05:43.334 --rc geninfo_all_blocks=1 00:05:43.334 --rc geninfo_unexecuted_blocks=1 00:05:43.334 00:05:43.334 ' 00:05:43.334 14:11:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:43.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.334 --rc genhtml_branch_coverage=1 00:05:43.334 --rc genhtml_function_coverage=1 00:05:43.334 --rc genhtml_legend=1 00:05:43.334 --rc geninfo_all_blocks=1 00:05:43.334 --rc geninfo_unexecuted_blocks=1 00:05:43.334 00:05:43.334 ' 00:05:43.334 14:11:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:43.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.334 --rc genhtml_branch_coverage=1 00:05:43.334 --rc genhtml_function_coverage=1 00:05:43.334 --rc genhtml_legend=1 00:05:43.334 --rc geninfo_all_blocks=1 00:05:43.334 --rc geninfo_unexecuted_blocks=1 00:05:43.334 00:05:43.334 ' 00:05:43.334 14:11:48 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:43.334 14:11:48 -- setup/devices.sh@192 -- # setup reset 00:05:43.334 14:11:48 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:43.334 14:11:48 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:44.270 14:11:49 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:44.270 14:11:49 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:44.270 14:11:49 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:44.270 14:11:49 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:44.270 14:11:49 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:44.270 14:11:49 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:44.270 14:11:49 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:44.270 14:11:49 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:44.270 14:11:49 -- common/autotest_common.sh@1660 
-- # [[ none != none ]] 00:05:44.270 14:11:49 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:44.270 14:11:49 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:44.270 14:11:49 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:44.270 14:11:49 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:44.270 14:11:49 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:44.270 14:11:49 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:44.270 14:11:49 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:44.270 14:11:49 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:44.270 14:11:49 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:44.270 14:11:49 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:44.270 14:11:49 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:44.270 14:11:49 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:44.270 14:11:49 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:44.270 14:11:49 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:44.270 14:11:49 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:44.270 14:11:49 -- setup/devices.sh@196 -- # blocks=() 00:05:44.270 14:11:49 -- setup/devices.sh@196 -- # declare -a blocks 00:05:44.270 14:11:49 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:44.270 14:11:49 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:44.270 14:11:49 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:44.270 14:11:49 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:44.270 14:11:49 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:44.270 14:11:49 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:44.270 14:11:49 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:05:44.270 14:11:49 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:44.270 14:11:49 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:44.270 14:11:49 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:44.270 14:11:49 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:44.270 No valid GPT data, bailing 00:05:44.270 14:11:49 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:44.270 14:11:49 -- scripts/common.sh@393 -- # pt= 00:05:44.270 14:11:49 -- scripts/common.sh@394 -- # return 1 00:05:44.270 14:11:49 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:44.270 14:11:49 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:44.270 14:11:49 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:44.270 14:11:49 -- setup/common.sh@80 -- # echo 5368709120 00:05:44.270 14:11:49 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:44.270 14:11:49 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:44.270 14:11:49 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:05:44.270 14:11:49 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:44.270 14:11:49 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:44.270 14:11:49 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:44.270 14:11:49 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:44.270 14:11:49 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:44.270 14:11:49 -- setup/devices.sh@204 -- # block_in_use nvme1n1 
00:05:44.270 14:11:49 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:05:44.270 14:11:49 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:44.270 No valid GPT data, bailing 00:05:44.270 14:11:49 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:44.270 14:11:49 -- scripts/common.sh@393 -- # pt= 00:05:44.270 14:11:49 -- scripts/common.sh@394 -- # return 1 00:05:44.270 14:11:49 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:44.270 14:11:49 -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:44.270 14:11:49 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:44.270 14:11:49 -- setup/common.sh@80 -- # echo 4294967296 00:05:44.270 14:11:49 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:44.270 14:11:49 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:44.270 14:11:49 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:44.270 14:11:49 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:44.270 14:11:49 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:05:44.270 14:11:49 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:44.270 14:11:49 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:44.270 14:11:49 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:44.270 14:11:49 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:05:44.270 14:11:49 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:05:44.270 14:11:49 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:05:44.529 No valid GPT data, bailing 00:05:44.529 14:11:49 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:44.529 14:11:49 -- scripts/common.sh@393 -- # pt= 00:05:44.529 14:11:49 -- scripts/common.sh@394 -- # return 1 00:05:44.529 14:11:49 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:05:44.529 14:11:49 -- setup/common.sh@76 -- # local dev=nvme1n2 00:05:44.529 14:11:49 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:05:44.529 14:11:49 -- setup/common.sh@80 -- # echo 4294967296 00:05:44.529 14:11:49 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:44.529 14:11:49 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:44.529 14:11:49 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:44.529 14:11:49 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:44.529 14:11:49 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:05:44.529 14:11:49 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:44.530 14:11:49 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:44.530 14:11:49 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:44.530 14:11:49 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:05:44.530 14:11:49 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:05:44.530 14:11:49 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:05:44.530 No valid GPT data, bailing 00:05:44.530 14:11:50 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:44.530 14:11:50 -- scripts/common.sh@393 -- # pt= 00:05:44.530 14:11:50 -- scripts/common.sh@394 -- # return 1 00:05:44.530 14:11:50 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:05:44.530 14:11:50 -- setup/common.sh@76 -- # local dev=nvme1n3 00:05:44.530 14:11:50 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:05:44.530 14:11:50 -- setup/common.sh@80 -- # echo 4294967296 
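Editor's note: the screening loop traced here decides which block devices the tests may touch: zoned devices are excluded (a /sys/block/<dev>/queue/zoned value other than "none"), disks that already carry a partition table are skipped (the "No valid GPT data, bailing" output is the pass case), and the disk must clear a minimum size. A rough stand-alone equivalent, assuming blkid and the usual sysfs layout; it is a sketch, not the setup/devices.sh code, and it relies on blkid alone where the suite also consults its spdk-gpt.py helper:

    # Decide whether a disk is usable for the tests: not zoned, not already
    # partitioned, and at least MIN_DISK_SIZE bytes.
    MIN_DISK_SIZE=$((3 * 1024 * 1024 * 1024))

    disk_usable() {
        local dev=$1 zoned pttype size
        zoned=$(cat "/sys/block/$dev/queue/zoned" 2>/dev/null || echo none)
        [[ $zoned == none ]] || return 1                 # skip zoned devices
        pttype=$(blkid -s PTTYPE -o value "/dev/$dev")   # empty when no partition table
        [[ -z $pttype ]] || return 1                     # skip disks already in use
        size=$(( $(cat "/sys/block/$dev/size") * 512 ))  # /sys size is in 512-byte sectors
        (( size >= MIN_DISK_SIZE ))
    }

    # e.g.: for d in /sys/block/nvme*n*; do disk_usable "${d##*/}" && echo "${d##*/} ok"; done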
00:05:44.530 14:11:50 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:44.530 14:11:50 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:44.530 14:11:50 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:44.530 14:11:50 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:44.530 14:11:50 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:44.530 14:11:50 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:44.530 14:11:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:44.530 14:11:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:44.530 14:11:50 -- common/autotest_common.sh@10 -- # set +x 00:05:44.530 ************************************ 00:05:44.530 START TEST nvme_mount 00:05:44.530 ************************************ 00:05:44.530 14:11:50 -- common/autotest_common.sh@1114 -- # nvme_mount 00:05:44.530 14:11:50 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:44.530 14:11:50 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:44.530 14:11:50 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:44.530 14:11:50 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:44.530 14:11:50 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:44.530 14:11:50 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:44.530 14:11:50 -- setup/common.sh@40 -- # local part_no=1 00:05:44.530 14:11:50 -- setup/common.sh@41 -- # local size=1073741824 00:05:44.530 14:11:50 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:44.530 14:11:50 -- setup/common.sh@44 -- # parts=() 00:05:44.530 14:11:50 -- setup/common.sh@44 -- # local parts 00:05:44.530 14:11:50 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:44.530 14:11:50 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:44.530 14:11:50 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:44.530 14:11:50 -- setup/common.sh@46 -- # (( part++ )) 00:05:44.530 14:11:50 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:44.530 14:11:50 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:44.530 14:11:50 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:44.530 14:11:50 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:45.466 Creating new GPT entries in memory. 00:05:45.466 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:45.466 other utilities. 00:05:45.466 14:11:51 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:45.466 14:11:51 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:45.466 14:11:51 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:45.466 14:11:51 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:45.466 14:11:51 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:46.844 Creating new GPT entries in memory. 00:05:46.844 The operation has completed successfully. 
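Editor's note: stripped to its essentials, the partitioning trace above wipes any existing GPT with sgdisk --zap-all, creates the new partition with sgdisk --new while holding an exclusive lock on the disk, and then waits for the matching udev events so later steps see the new /dev node. A hedged sketch along those lines; it uses udevadm settle where the suite runs its own scripts/sync_dev_uevents.sh helper, and the sector window matches the 2048:264191 range shown above:

    # Re-partition a disk the way the trace does: zap, create one partition
    # under flock, then wait for udev. Sketch only.
    make_test_partition() {
        local disk=$1   # e.g. /dev/nvme0n1
        sgdisk "$disk" --zap-all
        # Serialize against anything else touching the disk while the GPT is written.
        flock "$disk" sgdisk "$disk" --new=1:2048:264191
        # Coarser stand-in for sync_dev_uevents.sh: wait for the udev queue to drain.
        udevadm settle
        [[ -b ${disk}p1 ]]
    }

    # make_test_partition /dev/nvme0n1 && mkfs.ext4 -qF /dev/nvme0n1p1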
00:05:46.844 14:11:52 -- setup/common.sh@57 -- # (( part++ )) 00:05:46.844 14:11:52 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:46.844 14:11:52 -- setup/common.sh@62 -- # wait 65855 00:05:46.844 14:11:52 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:46.844 14:11:52 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:46.844 14:11:52 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:46.844 14:11:52 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:46.844 14:11:52 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:46.844 14:11:52 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:46.844 14:11:52 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:46.844 14:11:52 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:46.844 14:11:52 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:46.844 14:11:52 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:46.844 14:11:52 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:46.844 14:11:52 -- setup/devices.sh@53 -- # local found=0 00:05:46.844 14:11:52 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:46.844 14:11:52 -- setup/devices.sh@56 -- # : 00:05:46.844 14:11:52 -- setup/devices.sh@59 -- # local pci status 00:05:46.844 14:11:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.844 14:11:52 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:46.844 14:11:52 -- setup/devices.sh@47 -- # setup output config 00:05:46.844 14:11:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:46.844 14:11:52 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:46.844 14:11:52 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:46.844 14:11:52 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:46.844 14:11:52 -- setup/devices.sh@63 -- # found=1 00:05:46.844 14:11:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.844 14:11:52 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:46.844 14:11:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.412 14:11:52 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:47.412 14:11:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.412 14:11:52 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:47.412 14:11:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.412 14:11:52 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:47.412 14:11:52 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:47.412 14:11:52 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:47.412 14:11:52 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:47.412 14:11:52 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:47.412 14:11:52 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:47.412 14:11:52 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:47.412 14:11:52 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:47.412 14:11:52 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:47.412 14:11:52 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:47.412 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:47.412 14:11:52 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:47.412 14:11:52 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:47.671 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:47.671 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:47.671 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:47.671 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:47.671 14:11:53 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:47.671 14:11:53 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:47.671 14:11:53 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:47.671 14:11:53 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:47.671 14:11:53 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:47.671 14:11:53 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:47.671 14:11:53 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:47.671 14:11:53 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:47.671 14:11:53 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:47.671 14:11:53 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:47.671 14:11:53 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:47.671 14:11:53 -- setup/devices.sh@53 -- # local found=0 00:05:47.671 14:11:53 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:47.671 14:11:53 -- setup/devices.sh@56 -- # : 00:05:47.671 14:11:53 -- setup/devices.sh@59 -- # local pci status 00:05:47.671 14:11:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.671 14:11:53 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:47.671 14:11:53 -- setup/devices.sh@47 -- # setup output config 00:05:47.671 14:11:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:47.671 14:11:53 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:47.929 14:11:53 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:47.929 14:11:53 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:47.929 14:11:53 -- setup/devices.sh@63 -- # found=1 00:05:47.929 14:11:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.929 14:11:53 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:47.929 
14:11:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.188 14:11:53 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:48.188 14:11:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.448 14:11:53 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:48.448 14:11:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.448 14:11:53 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:48.448 14:11:53 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:48.448 14:11:53 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:48.448 14:11:53 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:48.448 14:11:53 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:48.448 14:11:53 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:48.448 14:11:54 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:05:48.448 14:11:54 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:48.448 14:11:54 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:48.448 14:11:54 -- setup/devices.sh@50 -- # local mount_point= 00:05:48.448 14:11:54 -- setup/devices.sh@51 -- # local test_file= 00:05:48.448 14:11:54 -- setup/devices.sh@53 -- # local found=0 00:05:48.448 14:11:54 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:48.448 14:11:54 -- setup/devices.sh@59 -- # local pci status 00:05:48.448 14:11:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.448 14:11:54 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:48.448 14:11:54 -- setup/devices.sh@47 -- # setup output config 00:05:48.448 14:11:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:48.448 14:11:54 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:48.707 14:11:54 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:48.707 14:11:54 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:48.707 14:11:54 -- setup/devices.sh@63 -- # found=1 00:05:48.707 14:11:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.707 14:11:54 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:48.707 14:11:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.274 14:11:54 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:49.274 14:11:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.274 14:11:54 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:49.274 14:11:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.274 14:11:54 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:49.274 14:11:54 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:49.274 14:11:54 -- setup/devices.sh@68 -- # return 0 00:05:49.274 14:11:54 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:49.274 14:11:54 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:49.274 14:11:54 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:49.274 14:11:54 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:49.274 14:11:54 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:49.274 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:05:49.274 00:05:49.274 real 0m4.729s 00:05:49.274 user 0m1.125s 00:05:49.274 sys 0m1.263s 00:05:49.274 14:11:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:49.274 14:11:54 -- common/autotest_common.sh@10 -- # set +x 00:05:49.274 ************************************ 00:05:49.274 END TEST nvme_mount 00:05:49.274 ************************************ 00:05:49.274 14:11:54 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:49.274 14:11:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:49.274 14:11:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:49.274 14:11:54 -- common/autotest_common.sh@10 -- # set +x 00:05:49.274 ************************************ 00:05:49.274 START TEST dm_mount 00:05:49.274 ************************************ 00:05:49.274 14:11:54 -- common/autotest_common.sh@1114 -- # dm_mount 00:05:49.274 14:11:54 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:49.274 14:11:54 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:49.274 14:11:54 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:49.274 14:11:54 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:49.274 14:11:54 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:49.274 14:11:54 -- setup/common.sh@40 -- # local part_no=2 00:05:49.274 14:11:54 -- setup/common.sh@41 -- # local size=1073741824 00:05:49.274 14:11:54 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:49.274 14:11:54 -- setup/common.sh@44 -- # parts=() 00:05:49.274 14:11:54 -- setup/common.sh@44 -- # local parts 00:05:49.274 14:11:54 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:49.274 14:11:54 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:49.274 14:11:54 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:49.274 14:11:54 -- setup/common.sh@46 -- # (( part++ )) 00:05:49.274 14:11:54 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:49.274 14:11:54 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:49.274 14:11:54 -- setup/common.sh@46 -- # (( part++ )) 00:05:49.274 14:11:54 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:49.274 14:11:54 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:49.274 14:11:54 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:49.274 14:11:54 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:50.648 Creating new GPT entries in memory. 00:05:50.648 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:50.648 other utilities. 00:05:50.648 14:11:55 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:50.648 14:11:55 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:50.648 14:11:55 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:50.648 14:11:55 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:50.648 14:11:55 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:51.584 Creating new GPT entries in memory. 00:05:51.584 The operation has completed successfully. 00:05:51.584 14:11:56 -- setup/common.sh@57 -- # (( part++ )) 00:05:51.584 14:11:56 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:51.584 14:11:56 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:51.584 14:11:56 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:51.584 14:11:56 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:52.520 The operation has completed successfully. 00:05:52.520 14:11:57 -- setup/common.sh@57 -- # (( part++ )) 00:05:52.520 14:11:57 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:52.520 14:11:57 -- setup/common.sh@62 -- # wait 66314 00:05:52.520 14:11:58 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:52.520 14:11:58 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:52.520 14:11:58 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:52.520 14:11:58 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:52.520 14:11:58 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:52.521 14:11:58 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:52.521 14:11:58 -- setup/devices.sh@161 -- # break 00:05:52.521 14:11:58 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:52.521 14:11:58 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:52.521 14:11:58 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:52.521 14:11:58 -- setup/devices.sh@166 -- # dm=dm-0 00:05:52.521 14:11:58 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:52.521 14:11:58 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:52.521 14:11:58 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:52.521 14:11:58 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:52.521 14:11:58 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:52.521 14:11:58 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:52.521 14:11:58 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:52.521 14:11:58 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:52.521 14:11:58 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:52.521 14:11:58 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:52.521 14:11:58 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:52.521 14:11:58 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:52.521 14:11:58 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:52.521 14:11:58 -- setup/devices.sh@53 -- # local found=0 00:05:52.521 14:11:58 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:52.521 14:11:58 -- setup/devices.sh@56 -- # : 00:05:52.521 14:11:58 -- setup/devices.sh@59 -- # local pci status 00:05:52.521 14:11:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.521 14:11:58 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:52.521 14:11:58 -- setup/devices.sh@47 -- # setup output config 00:05:52.521 14:11:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:52.521 14:11:58 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:52.779 14:11:58 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:52.780 14:11:58 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:52.780 14:11:58 -- setup/devices.sh@63 -- # found=1 00:05:52.780 14:11:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.780 14:11:58 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:52.780 14:11:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.039 14:11:58 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:53.039 14:11:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.299 14:11:58 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:53.299 14:11:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.299 14:11:58 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:53.299 14:11:58 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:53.299 14:11:58 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:53.299 14:11:58 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:53.299 14:11:58 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:53.299 14:11:58 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:53.299 14:11:58 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:53.299 14:11:58 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:53.299 14:11:58 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:53.299 14:11:58 -- setup/devices.sh@50 -- # local mount_point= 00:05:53.299 14:11:58 -- setup/devices.sh@51 -- # local test_file= 00:05:53.299 14:11:58 -- setup/devices.sh@53 -- # local found=0 00:05:53.299 14:11:58 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:53.299 14:11:58 -- setup/devices.sh@59 -- # local pci status 00:05:53.299 14:11:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.299 14:11:58 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:53.299 14:11:58 -- setup/devices.sh@47 -- # setup output config 00:05:53.299 14:11:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:53.299 14:11:58 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:53.559 14:11:59 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:53.559 14:11:59 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:53.559 14:11:59 -- setup/devices.sh@63 -- # found=1 00:05:53.559 14:11:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.559 14:11:59 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:53.559 14:11:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.818 14:11:59 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:53.818 14:11:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.818 14:11:59 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:53.818 14:11:59 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.078 14:11:59 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:54.078 14:11:59 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:54.078 14:11:59 -- setup/devices.sh@68 -- # return 0 00:05:54.078 14:11:59 -- setup/devices.sh@187 -- # cleanup_dm 00:05:54.078 14:11:59 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:54.078 14:11:59 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:54.078 14:11:59 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:54.078 14:11:59 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:54.078 14:11:59 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:54.078 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:54.078 14:11:59 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:54.078 14:11:59 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:54.078 00:05:54.078 real 0m4.716s 00:05:54.078 user 0m0.697s 00:05:54.078 sys 0m0.922s 00:05:54.078 14:11:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:54.078 14:11:59 -- common/autotest_common.sh@10 -- # set +x 00:05:54.078 ************************************ 00:05:54.078 END TEST dm_mount 00:05:54.078 ************************************ 00:05:54.078 14:11:59 -- setup/devices.sh@1 -- # cleanup 00:05:54.078 14:11:59 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:54.078 14:11:59 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:54.078 14:11:59 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:54.078 14:11:59 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:54.078 14:11:59 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:54.078 14:11:59 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:54.337 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:54.337 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:54.337 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:54.337 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:54.337 14:11:59 -- setup/devices.sh@12 -- # cleanup_dm 00:05:54.337 14:11:59 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:54.337 14:11:59 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:54.337 14:11:59 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:54.337 14:11:59 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:54.337 14:11:59 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:54.337 14:11:59 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:54.337 00:05:54.337 real 0m11.179s 00:05:54.337 user 0m2.606s 00:05:54.337 sys 0m2.842s 00:05:54.337 14:11:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:54.337 ************************************ 00:05:54.337 END TEST devices 00:05:54.337 ************************************ 00:05:54.337 14:11:59 -- common/autotest_common.sh@10 -- # set +x 00:05:54.337 00:05:54.337 real 0m24.144s 00:05:54.337 user 0m8.311s 00:05:54.337 sys 0m10.176s 00:05:54.337 14:11:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:54.337 14:11:59 -- common/autotest_common.sh@10 -- # set +x 00:05:54.337 ************************************ 00:05:54.337 END TEST setup.sh 00:05:54.337 ************************************ 00:05:54.596 14:12:00 -- 
spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:54.596 Hugepages 00:05:54.596 node hugesize free / total 00:05:54.596 node0 1048576kB 0 / 0 00:05:54.596 node0 2048kB 2048 / 2048 00:05:54.596 00:05:54.596 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:54.855 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:54.856 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:54.856 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:54.856 14:12:00 -- spdk/autotest.sh@128 -- # uname -s 00:05:54.856 14:12:00 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:05:54.856 14:12:00 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:05:54.856 14:12:00 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:55.817 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:55.817 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:55.817 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:55.817 14:12:01 -- common/autotest_common.sh@1527 -- # sleep 1 00:05:56.754 14:12:02 -- common/autotest_common.sh@1528 -- # bdfs=() 00:05:56.754 14:12:02 -- common/autotest_common.sh@1528 -- # local bdfs 00:05:56.754 14:12:02 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:05:56.754 14:12:02 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:05:56.754 14:12:02 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:56.754 14:12:02 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:56.754 14:12:02 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:56.754 14:12:02 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:56.754 14:12:02 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:57.013 14:12:02 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:57.013 14:12:02 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:57.013 14:12:02 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:57.272 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:57.272 Waiting for block devices as requested 00:05:57.272 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:57.531 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:05:57.531 14:12:03 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:57.531 14:12:03 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:57.531 14:12:03 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:05:57.531 14:12:03 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:57.531 14:12:03 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:57.531 14:12:03 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:05:57.531 14:12:03 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:57.531 14:12:03 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:05:57.531 14:12:03 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:05:57.531 14:12:03 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:05:57.531 14:12:03 -- 
common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:57.531 14:12:03 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:57.531 14:12:03 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:57.531 14:12:03 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:57.531 14:12:03 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:57.531 14:12:03 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:57.531 14:12:03 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:05:57.531 14:12:03 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:57.531 14:12:03 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:57.531 14:12:03 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:57.531 14:12:03 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:57.531 14:12:03 -- common/autotest_common.sh@1552 -- # continue 00:05:57.531 14:12:03 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:57.531 14:12:03 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:05:57.531 14:12:03 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:05:57.531 14:12:03 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:57.531 14:12:03 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:57.531 14:12:03 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:05:57.531 14:12:03 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:57.531 14:12:03 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:05:57.531 14:12:03 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:05:57.531 14:12:03 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:05:57.531 14:12:03 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:57.531 14:12:03 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:57.531 14:12:03 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:57.531 14:12:03 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:57.531 14:12:03 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:57.531 14:12:03 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:57.531 14:12:03 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:05:57.531 14:12:03 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:57.531 14:12:03 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:57.531 14:12:03 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:57.531 14:12:03 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:57.531 14:12:03 -- common/autotest_common.sh@1552 -- # continue 00:05:57.531 14:12:03 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:05:57.531 14:12:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:57.531 14:12:03 -- common/autotest_common.sh@10 -- # set +x 00:05:57.531 14:12:03 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:05:57.531 14:12:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:57.531 14:12:03 -- common/autotest_common.sh@10 -- # set +x 00:05:57.531 14:12:03 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:58.467 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:58.467 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:58.467 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:05:58.467 14:12:04 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:05:58.467 14:12:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:58.467 14:12:04 -- common/autotest_common.sh@10 -- # set +x 00:05:58.467 14:12:04 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:05:58.467 14:12:04 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:05:58.467 14:12:04 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:05:58.467 14:12:04 -- common/autotest_common.sh@1572 -- # bdfs=() 00:05:58.467 14:12:04 -- common/autotest_common.sh@1572 -- # local bdfs 00:05:58.467 14:12:04 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:05:58.467 14:12:04 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:58.467 14:12:04 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:58.467 14:12:04 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:58.467 14:12:04 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:58.467 14:12:04 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:58.726 14:12:04 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:58.726 14:12:04 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:58.726 14:12:04 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:58.726 14:12:04 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:58.726 14:12:04 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:58.726 14:12:04 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:58.726 14:12:04 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:58.726 14:12:04 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:05:58.726 14:12:04 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:58.726 14:12:04 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:58.726 14:12:04 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:05:58.726 14:12:04 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:05:58.726 14:12:04 -- common/autotest_common.sh@1588 -- # return 0 00:05:58.726 14:12:04 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:05:58.726 14:12:04 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:05:58.726 14:12:04 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:58.726 14:12:04 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:58.726 14:12:04 -- spdk/autotest.sh@160 -- # timing_enter lib 00:05:58.726 14:12:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:58.726 14:12:04 -- common/autotest_common.sh@10 -- # set +x 00:05:58.726 14:12:04 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:58.726 14:12:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:58.726 14:12:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.726 14:12:04 -- common/autotest_common.sh@10 -- # set +x 00:05:58.726 ************************************ 00:05:58.726 START TEST env 00:05:58.726 ************************************ 00:05:58.726 14:12:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:58.726 * Looking for test storage... 
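Editor's note: the two sweeps traced above both start from the same BDF list produced by gen_nvme.sh piped through jq. The namespace-revert pass reads each controller's OACS word to see whether namespace management is offered, and opal_revert_cleanup keeps only controllers whose PCI device ID is 0x0a54. A condensed, stand-alone sketch of those checks follows; the hard-coded BDF list and echo messages are illustrative only, the autotest helpers discover the devices themselves.

    #!/usr/bin/env bash
    # Requires nvme-cli; BDFs below are the two QEMU controllers from this run.
    for bdf in 0000:00:06.0 0000:00:07.0; do
        # Namespace-revert check: map the BDF to its controller node and read OACS;
        # bit 3 (0x8) advertises namespace management, matching oacs=' 0x12a' above.
        ctrl=/dev/$(basename /sys/bus/pci/devices/${bdf}/nvme/nvme*)
        oacs=$(nvme id-ctrl "${ctrl}" | grep oacs | cut -d: -f2)
        (( oacs & 0x8 )) && echo "${ctrl}: namespace management supported"

        # opal_revert_cleanup filter: only controllers whose PCI device ID is 0x0a54
        # are collected for an OPAL revert.
        dev_id=$(cat /sys/bus/pci/devices/${bdf}/device)
        [[ ${dev_id} == 0x0a54 ]] && echo "${bdf}: queued for OPAL revert"
    done

Both controllers in this run identify as 0x0010, so the OPAL list stays empty and the helper falls straight through to the `[[ -z '' ]]` / `return 0` seen in the trace.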
00:05:58.726 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:58.726 14:12:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:58.726 14:12:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:58.727 14:12:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:58.986 14:12:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:58.986 14:12:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:58.986 14:12:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:58.986 14:12:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:58.986 14:12:04 -- scripts/common.sh@335 -- # IFS=.-: 00:05:58.986 14:12:04 -- scripts/common.sh@335 -- # read -ra ver1 00:05:58.986 14:12:04 -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.986 14:12:04 -- scripts/common.sh@336 -- # read -ra ver2 00:05:58.986 14:12:04 -- scripts/common.sh@337 -- # local 'op=<' 00:05:58.986 14:12:04 -- scripts/common.sh@339 -- # ver1_l=2 00:05:58.986 14:12:04 -- scripts/common.sh@340 -- # ver2_l=1 00:05:58.986 14:12:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:58.986 14:12:04 -- scripts/common.sh@343 -- # case "$op" in 00:05:58.986 14:12:04 -- scripts/common.sh@344 -- # : 1 00:05:58.986 14:12:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:58.986 14:12:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:58.986 14:12:04 -- scripts/common.sh@364 -- # decimal 1 00:05:58.986 14:12:04 -- scripts/common.sh@352 -- # local d=1 00:05:58.986 14:12:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.986 14:12:04 -- scripts/common.sh@354 -- # echo 1 00:05:58.986 14:12:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:58.987 14:12:04 -- scripts/common.sh@365 -- # decimal 2 00:05:58.987 14:12:04 -- scripts/common.sh@352 -- # local d=2 00:05:58.987 14:12:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.987 14:12:04 -- scripts/common.sh@354 -- # echo 2 00:05:58.987 14:12:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:58.987 14:12:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:58.987 14:12:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:58.987 14:12:04 -- scripts/common.sh@367 -- # return 0 00:05:58.987 14:12:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.987 14:12:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:58.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.987 --rc genhtml_branch_coverage=1 00:05:58.987 --rc genhtml_function_coverage=1 00:05:58.987 --rc genhtml_legend=1 00:05:58.987 --rc geninfo_all_blocks=1 00:05:58.987 --rc geninfo_unexecuted_blocks=1 00:05:58.987 00:05:58.987 ' 00:05:58.987 14:12:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:58.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.987 --rc genhtml_branch_coverage=1 00:05:58.987 --rc genhtml_function_coverage=1 00:05:58.987 --rc genhtml_legend=1 00:05:58.987 --rc geninfo_all_blocks=1 00:05:58.987 --rc geninfo_unexecuted_blocks=1 00:05:58.987 00:05:58.987 ' 00:05:58.987 14:12:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:58.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.987 --rc genhtml_branch_coverage=1 00:05:58.987 --rc genhtml_function_coverage=1 00:05:58.987 --rc genhtml_legend=1 00:05:58.987 --rc geninfo_all_blocks=1 00:05:58.987 --rc geninfo_unexecuted_blocks=1 00:05:58.987 00:05:58.987 ' 00:05:58.987 14:12:04 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:58.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.987 --rc genhtml_branch_coverage=1 00:05:58.987 --rc genhtml_function_coverage=1 00:05:58.987 --rc genhtml_legend=1 00:05:58.987 --rc geninfo_all_blocks=1 00:05:58.987 --rc geninfo_unexecuted_blocks=1 00:05:58.987 00:05:58.987 ' 00:05:58.987 14:12:04 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:58.987 14:12:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:58.987 14:12:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.987 14:12:04 -- common/autotest_common.sh@10 -- # set +x 00:05:58.987 ************************************ 00:05:58.987 START TEST env_memory 00:05:58.987 ************************************ 00:05:58.987 14:12:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:58.987 00:05:58.987 00:05:58.987 CUnit - A unit testing framework for C - Version 2.1-3 00:05:58.987 http://cunit.sourceforge.net/ 00:05:58.987 00:05:58.987 00:05:58.987 Suite: memory 00:05:58.987 Test: alloc and free memory map ...[2024-12-05 14:12:04.461313] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:58.987 passed 00:05:58.987 Test: mem map translation ...[2024-12-05 14:12:04.492715] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:58.987 [2024-12-05 14:12:04.492764] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:58.987 [2024-12-05 14:12:04.492829] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:58.987 [2024-12-05 14:12:04.492841] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:58.987 passed 00:05:58.987 Test: mem map registration ...[2024-12-05 14:12:04.557118] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:58.987 [2024-12-05 14:12:04.557162] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:58.987 passed 00:05:59.246 Test: mem map adjacent registrations ...passed 00:05:59.246 00:05:59.246 Run Summary: Type Total Ran Passed Failed Inactive 00:05:59.246 suites 1 1 n/a 0 0 00:05:59.246 tests 4 4 4 0 0 00:05:59.246 asserts 152 152 152 0 n/a 00:05:59.246 00:05:59.246 Elapsed time = 0.213 seconds 00:05:59.246 00:05:59.246 real 0m0.233s 00:05:59.246 user 0m0.213s 00:05:59.246 sys 0m0.013s 00:05:59.246 14:12:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:59.246 14:12:04 -- common/autotest_common.sh@10 -- # set +x 00:05:59.246 ************************************ 00:05:59.246 END TEST env_memory 00:05:59.246 ************************************ 00:05:59.246 14:12:04 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:59.246 14:12:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:59.246 14:12:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.246 14:12:04 -- 
common/autotest_common.sh@10 -- # set +x 00:05:59.246 ************************************ 00:05:59.246 START TEST env_vtophys 00:05:59.246 ************************************ 00:05:59.246 14:12:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:59.246 EAL: lib.eal log level changed from notice to debug 00:05:59.246 EAL: Detected lcore 0 as core 0 on socket 0 00:05:59.246 EAL: Detected lcore 1 as core 0 on socket 0 00:05:59.246 EAL: Detected lcore 2 as core 0 on socket 0 00:05:59.246 EAL: Detected lcore 3 as core 0 on socket 0 00:05:59.246 EAL: Detected lcore 4 as core 0 on socket 0 00:05:59.246 EAL: Detected lcore 5 as core 0 on socket 0 00:05:59.246 EAL: Detected lcore 6 as core 0 on socket 0 00:05:59.246 EAL: Detected lcore 7 as core 0 on socket 0 00:05:59.246 EAL: Detected lcore 8 as core 0 on socket 0 00:05:59.246 EAL: Detected lcore 9 as core 0 on socket 0 00:05:59.246 EAL: Maximum logical cores by configuration: 128 00:05:59.246 EAL: Detected CPU lcores: 10 00:05:59.246 EAL: Detected NUMA nodes: 1 00:05:59.246 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:59.246 EAL: Detected shared linkage of DPDK 00:05:59.246 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:59.246 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:59.246 EAL: Registered [vdev] bus. 00:05:59.246 EAL: bus.vdev log level changed from disabled to notice 00:05:59.246 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:59.246 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:59.247 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:59.247 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:59.247 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:59.247 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:59.247 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:59.247 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:59.247 EAL: No shared files mode enabled, IPC will be disabled 00:05:59.247 EAL: No shared files mode enabled, IPC is disabled 00:05:59.247 EAL: Selected IOVA mode 'PA' 00:05:59.247 EAL: Probing VFIO support... 00:05:59.247 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:59.247 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:59.247 EAL: Ask a virtual area of 0x2e000 bytes 00:05:59.247 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:59.247 EAL: Setting up physically contiguous memory... 
00:05:59.247 EAL: Setting maximum number of open files to 524288 00:05:59.247 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:59.247 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:59.247 EAL: Ask a virtual area of 0x61000 bytes 00:05:59.247 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:59.247 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:59.247 EAL: Ask a virtual area of 0x400000000 bytes 00:05:59.247 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:59.247 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:59.247 EAL: Ask a virtual area of 0x61000 bytes 00:05:59.247 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:59.247 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:59.247 EAL: Ask a virtual area of 0x400000000 bytes 00:05:59.247 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:59.247 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:59.247 EAL: Ask a virtual area of 0x61000 bytes 00:05:59.247 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:59.247 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:59.247 EAL: Ask a virtual area of 0x400000000 bytes 00:05:59.247 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:59.247 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:59.247 EAL: Ask a virtual area of 0x61000 bytes 00:05:59.247 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:59.247 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:59.247 EAL: Ask a virtual area of 0x400000000 bytes 00:05:59.247 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:59.247 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:59.247 EAL: Hugepages will be freed exactly as allocated. 00:05:59.247 EAL: No shared files mode enabled, IPC is disabled 00:05:59.247 EAL: No shared files mode enabled, IPC is disabled 00:05:59.247 EAL: TSC frequency is ~2200000 KHz 00:05:59.247 EAL: Main lcore 0 is ready (tid=7fec16304a00;cpuset=[0]) 00:05:59.247 EAL: Trying to obtain current memory policy. 00:05:59.247 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.247 EAL: Restoring previous memory policy: 0 00:05:59.247 EAL: request: mp_malloc_sync 00:05:59.247 EAL: No shared files mode enabled, IPC is disabled 00:05:59.247 EAL: Heap on socket 0 was expanded by 2MB 00:05:59.247 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:59.247 EAL: No shared files mode enabled, IPC is disabled 00:05:59.247 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:59.247 EAL: Mem event callback 'spdk:(nil)' registered 00:05:59.247 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:59.247 00:05:59.247 00:05:59.247 CUnit - A unit testing framework for C - Version 2.1-3 00:05:59.247 http://cunit.sourceforge.net/ 00:05:59.247 00:05:59.247 00:05:59.247 Suite: components_suite 00:05:59.247 Test: vtophys_malloc_test ...passed 00:05:59.247 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:05:59.247 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.247 EAL: Restoring previous memory policy: 4 00:05:59.247 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.247 EAL: request: mp_malloc_sync 00:05:59.247 EAL: No shared files mode enabled, IPC is disabled 00:05:59.247 EAL: Heap on socket 0 was expanded by 4MB 00:05:59.247 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.247 EAL: request: mp_malloc_sync 00:05:59.247 EAL: No shared files mode enabled, IPC is disabled 00:05:59.247 EAL: Heap on socket 0 was shrunk by 4MB 00:05:59.247 EAL: Trying to obtain current memory policy. 00:05:59.247 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.247 EAL: Restoring previous memory policy: 4 00:05:59.247 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.247 EAL: request: mp_malloc_sync 00:05:59.247 EAL: No shared files mode enabled, IPC is disabled 00:05:59.247 EAL: Heap on socket 0 was expanded by 6MB 00:05:59.247 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.247 EAL: request: mp_malloc_sync 00:05:59.247 EAL: No shared files mode enabled, IPC is disabled 00:05:59.247 EAL: Heap on socket 0 was shrunk by 6MB 00:05:59.247 EAL: Trying to obtain current memory policy. 00:05:59.247 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.247 EAL: Restoring previous memory policy: 4 00:05:59.247 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.247 EAL: request: mp_malloc_sync 00:05:59.247 EAL: No shared files mode enabled, IPC is disabled 00:05:59.247 EAL: Heap on socket 0 was expanded by 10MB 00:05:59.247 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.247 EAL: request: mp_malloc_sync 00:05:59.247 EAL: No shared files mode enabled, IPC is disabled 00:05:59.247 EAL: Heap on socket 0 was shrunk by 10MB 00:05:59.247 EAL: Trying to obtain current memory policy. 00:05:59.247 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.247 EAL: Restoring previous memory policy: 4 00:05:59.247 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.247 EAL: request: mp_malloc_sync 00:05:59.247 EAL: No shared files mode enabled, IPC is disabled 00:05:59.247 EAL: Heap on socket 0 was expanded by 18MB 00:05:59.247 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.247 EAL: request: mp_malloc_sync 00:05:59.247 EAL: No shared files mode enabled, IPC is disabled 00:05:59.247 EAL: Heap on socket 0 was shrunk by 18MB 00:05:59.247 EAL: Trying to obtain current memory policy. 00:05:59.247 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.247 EAL: Restoring previous memory policy: 4 00:05:59.247 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.247 EAL: request: mp_malloc_sync 00:05:59.247 EAL: No shared files mode enabled, IPC is disabled 00:05:59.247 EAL: Heap on socket 0 was expanded by 34MB 00:05:59.247 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.247 EAL: request: mp_malloc_sync 00:05:59.247 EAL: No shared files mode enabled, IPC is disabled 00:05:59.247 EAL: Heap on socket 0 was shrunk by 34MB 00:05:59.247 EAL: Trying to obtain current memory policy. 
00:05:59.247 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.511 EAL: Restoring previous memory policy: 4 00:05:59.511 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.511 EAL: request: mp_malloc_sync 00:05:59.511 EAL: No shared files mode enabled, IPC is disabled 00:05:59.511 EAL: Heap on socket 0 was expanded by 66MB 00:05:59.511 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.511 EAL: request: mp_malloc_sync 00:05:59.511 EAL: No shared files mode enabled, IPC is disabled 00:05:59.511 EAL: Heap on socket 0 was shrunk by 66MB 00:05:59.511 EAL: Trying to obtain current memory policy. 00:05:59.511 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.511 EAL: Restoring previous memory policy: 4 00:05:59.511 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.511 EAL: request: mp_malloc_sync 00:05:59.511 EAL: No shared files mode enabled, IPC is disabled 00:05:59.511 EAL: Heap on socket 0 was expanded by 130MB 00:05:59.511 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.511 EAL: request: mp_malloc_sync 00:05:59.511 EAL: No shared files mode enabled, IPC is disabled 00:05:59.512 EAL: Heap on socket 0 was shrunk by 130MB 00:05:59.512 EAL: Trying to obtain current memory policy. 00:05:59.512 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.512 EAL: Restoring previous memory policy: 4 00:05:59.512 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.512 EAL: request: mp_malloc_sync 00:05:59.512 EAL: No shared files mode enabled, IPC is disabled 00:05:59.512 EAL: Heap on socket 0 was expanded by 258MB 00:05:59.512 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.775 EAL: request: mp_malloc_sync 00:05:59.775 EAL: No shared files mode enabled, IPC is disabled 00:05:59.775 EAL: Heap on socket 0 was shrunk by 258MB 00:05:59.775 EAL: Trying to obtain current memory policy. 00:05:59.775 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.775 EAL: Restoring previous memory policy: 4 00:05:59.775 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.775 EAL: request: mp_malloc_sync 00:05:59.775 EAL: No shared files mode enabled, IPC is disabled 00:05:59.775 EAL: Heap on socket 0 was expanded by 514MB 00:06:00.033 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.033 EAL: request: mp_malloc_sync 00:06:00.033 EAL: No shared files mode enabled, IPC is disabled 00:06:00.033 EAL: Heap on socket 0 was shrunk by 514MB 00:06:00.033 EAL: Trying to obtain current memory policy. 
00:06:00.033 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:00.291 EAL: Restoring previous memory policy: 4 00:06:00.291 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.291 EAL: request: mp_malloc_sync 00:06:00.291 EAL: No shared files mode enabled, IPC is disabled 00:06:00.291 EAL: Heap on socket 0 was expanded by 1026MB 00:06:00.549 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.549 passed 00:06:00.549 00:06:00.549 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.549 suites 1 1 n/a 0 0 00:06:00.549 tests 2 2 2 0 0 00:06:00.549 asserts 5358 5358 5358 0 n/a 00:06:00.549 00:06:00.549 Elapsed time = 1.292 seconds 00:06:00.549 EAL: request: mp_malloc_sync 00:06:00.549 EAL: No shared files mode enabled, IPC is disabled 00:06:00.549 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:00.549 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.549 EAL: request: mp_malloc_sync 00:06:00.549 EAL: No shared files mode enabled, IPC is disabled 00:06:00.549 EAL: Heap on socket 0 was shrunk by 2MB 00:06:00.549 EAL: No shared files mode enabled, IPC is disabled 00:06:00.549 EAL: No shared files mode enabled, IPC is disabled 00:06:00.549 EAL: No shared files mode enabled, IPC is disabled 00:06:00.549 ************************************ 00:06:00.549 END TEST env_vtophys 00:06:00.549 ************************************ 00:06:00.549 00:06:00.549 real 0m1.487s 00:06:00.549 user 0m0.824s 00:06:00.549 sys 0m0.530s 00:06:00.549 14:12:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:00.549 14:12:06 -- common/autotest_common.sh@10 -- # set +x 00:06:00.808 14:12:06 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:00.808 14:12:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:00.808 14:12:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.808 14:12:06 -- common/autotest_common.sh@10 -- # set +x 00:06:00.808 ************************************ 00:06:00.808 START TEST env_pci 00:06:00.808 ************************************ 00:06:00.808 14:12:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:00.808 00:06:00.808 00:06:00.808 CUnit - A unit testing framework for C - Version 2.1-3 00:06:00.808 http://cunit.sourceforge.net/ 00:06:00.808 00:06:00.808 00:06:00.808 Suite: pci 00:06:00.808 Test: pci_hook ...[2024-12-05 14:12:06.256424] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 67463 has claimed it 00:06:00.808 EAL: Cannot find device (10000:00:01.0) 00:06:00.808 passed 00:06:00.808 00:06:00.808 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.808 suites 1 1 n/a 0 0 00:06:00.808 tests 1 1 1 0 0 00:06:00.808 asserts 25 25 25 0 n/a 00:06:00.808 00:06:00.808 Elapsed time = 0.002 seconds 00:06:00.808 EAL: Failed to attach device on primary process 00:06:00.808 00:06:00.808 real 0m0.024s 00:06:00.808 user 0m0.012s 00:06:00.808 sys 0m0.010s 00:06:00.808 ************************************ 00:06:00.808 END TEST env_pci 00:06:00.808 ************************************ 00:06:00.808 14:12:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:00.808 14:12:06 -- common/autotest_common.sh@10 -- # set +x 00:06:00.808 14:12:06 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:00.808 14:12:06 -- env/env.sh@15 -- # uname 00:06:00.808 14:12:06 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:00.808 14:12:06 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:06:00.808 14:12:06 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:00.808 14:12:06 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:06:00.808 14:12:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.808 14:12:06 -- common/autotest_common.sh@10 -- # set +x 00:06:00.808 ************************************ 00:06:00.808 START TEST env_dpdk_post_init 00:06:00.808 ************************************ 00:06:00.808 14:12:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:00.808 EAL: Detected CPU lcores: 10 00:06:00.808 EAL: Detected NUMA nodes: 1 00:06:00.808 EAL: Detected shared linkage of DPDK 00:06:00.808 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:00.808 EAL: Selected IOVA mode 'PA' 00:06:01.076 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:01.076 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:06:01.076 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:06:01.076 Starting DPDK initialization... 00:06:01.076 Starting SPDK post initialization... 00:06:01.076 SPDK NVMe probe 00:06:01.076 Attaching to 0000:00:06.0 00:06:01.076 Attaching to 0000:00:07.0 00:06:01.076 Attached to 0000:00:06.0 00:06:01.076 Attached to 0000:00:07.0 00:06:01.076 Cleaning up... 00:06:01.076 00:06:01.076 real 0m0.169s 00:06:01.076 user 0m0.036s 00:06:01.076 sys 0m0.033s 00:06:01.076 ************************************ 00:06:01.076 END TEST env_dpdk_post_init 00:06:01.076 ************************************ 00:06:01.076 14:12:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:01.076 14:12:06 -- common/autotest_common.sh@10 -- # set +x 00:06:01.076 14:12:06 -- env/env.sh@26 -- # uname 00:06:01.076 14:12:06 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:01.076 14:12:06 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:01.076 14:12:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:01.076 14:12:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.076 14:12:06 -- common/autotest_common.sh@10 -- # set +x 00:06:01.076 ************************************ 00:06:01.076 START TEST env_mem_callbacks 00:06:01.076 ************************************ 00:06:01.076 14:12:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:01.076 EAL: Detected CPU lcores: 10 00:06:01.077 EAL: Detected NUMA nodes: 1 00:06:01.077 EAL: Detected shared linkage of DPDK 00:06:01.077 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:01.077 EAL: Selected IOVA mode 'PA' 00:06:01.077 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:01.077 00:06:01.077 00:06:01.077 CUnit - A unit testing framework for C - Version 2.1-3 00:06:01.077 http://cunit.sourceforge.net/ 00:06:01.077 00:06:01.077 00:06:01.077 Suite: memory 00:06:01.077 Test: test ... 
00:06:01.077 register 0x200000200000 2097152 00:06:01.077 malloc 3145728 00:06:01.077 register 0x200000400000 4194304 00:06:01.077 buf 0x200000500000 len 3145728 PASSED 00:06:01.077 malloc 64 00:06:01.077 buf 0x2000004fff40 len 64 PASSED 00:06:01.077 malloc 4194304 00:06:01.077 register 0x200000800000 6291456 00:06:01.077 buf 0x200000a00000 len 4194304 PASSED 00:06:01.077 free 0x200000500000 3145728 00:06:01.077 free 0x2000004fff40 64 00:06:01.077 unregister 0x200000400000 4194304 PASSED 00:06:01.077 free 0x200000a00000 4194304 00:06:01.077 unregister 0x200000800000 6291456 PASSED 00:06:01.077 malloc 8388608 00:06:01.077 register 0x200000400000 10485760 00:06:01.077 buf 0x200000600000 len 8388608 PASSED 00:06:01.077 free 0x200000600000 8388608 00:06:01.077 unregister 0x200000400000 10485760 PASSED 00:06:01.077 passed 00:06:01.077 00:06:01.077 Run Summary: Type Total Ran Passed Failed Inactive 00:06:01.077 suites 1 1 n/a 0 0 00:06:01.077 tests 1 1 1 0 0 00:06:01.077 asserts 15 15 15 0 n/a 00:06:01.077 00:06:01.077 Elapsed time = 0.009 seconds 00:06:01.077 00:06:01.077 real 0m0.147s 00:06:01.077 user 0m0.013s 00:06:01.077 sys 0m0.030s 00:06:01.077 14:12:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:01.077 14:12:06 -- common/autotest_common.sh@10 -- # set +x 00:06:01.077 ************************************ 00:06:01.077 END TEST env_mem_callbacks 00:06:01.077 ************************************ 00:06:01.336 ************************************ 00:06:01.336 END TEST env 00:06:01.336 ************************************ 00:06:01.336 00:06:01.336 real 0m2.557s 00:06:01.336 user 0m1.302s 00:06:01.336 sys 0m0.897s 00:06:01.336 14:12:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:01.336 14:12:06 -- common/autotest_common.sh@10 -- # set +x 00:06:01.336 14:12:06 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:01.336 14:12:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:01.336 14:12:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.336 14:12:06 -- common/autotest_common.sh@10 -- # set +x 00:06:01.336 ************************************ 00:06:01.336 START TEST rpc 00:06:01.336 ************************************ 00:06:01.336 14:12:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:01.336 * Looking for test storage... 
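Editor's note: every suite in this log is launched through the run_test helper, which prints the START TEST / END TEST banners and the real/user/sys timing summaries seen after env_memory, env_vtophys and the tests above. The sketch below is a hypothetical reduction of that wrapper, only to show the shape of the banners and the arity guard that appears as '[' 2 -le 1 ']' in the trace; the real helper in autotest_common.sh also manages xtrace state and per-test timing records.

    #!/usr/bin/env bash
    run_test() {
        # First argument is the test name, the rest is the command to run.
        local name=$1; shift
        [ "$#" -ge 1 ] || return 1          # corresponds to the "'[' 2 -le 1 ']'" guard
        echo "************************************"
        echo "START TEST ${name}"
        echo "************************************"
        time "$@"                           # prints the real/user/sys summary
        local rc=$?
        echo "************************************"
        echo "END TEST ${name}"
        echo "************************************"
        return "$rc"
    }

    run_test "rpc" /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh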
00:06:01.336 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:01.336 14:12:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:01.336 14:12:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:01.336 14:12:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:01.595 14:12:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:01.595 14:12:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:01.595 14:12:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:01.595 14:12:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:01.595 14:12:06 -- scripts/common.sh@335 -- # IFS=.-: 00:06:01.595 14:12:06 -- scripts/common.sh@335 -- # read -ra ver1 00:06:01.595 14:12:06 -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.595 14:12:06 -- scripts/common.sh@336 -- # read -ra ver2 00:06:01.595 14:12:06 -- scripts/common.sh@337 -- # local 'op=<' 00:06:01.595 14:12:06 -- scripts/common.sh@339 -- # ver1_l=2 00:06:01.595 14:12:06 -- scripts/common.sh@340 -- # ver2_l=1 00:06:01.595 14:12:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:01.595 14:12:06 -- scripts/common.sh@343 -- # case "$op" in 00:06:01.595 14:12:06 -- scripts/common.sh@344 -- # : 1 00:06:01.595 14:12:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:01.595 14:12:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:01.595 14:12:06 -- scripts/common.sh@364 -- # decimal 1 00:06:01.595 14:12:06 -- scripts/common.sh@352 -- # local d=1 00:06:01.595 14:12:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.595 14:12:07 -- scripts/common.sh@354 -- # echo 1 00:06:01.595 14:12:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:01.595 14:12:07 -- scripts/common.sh@365 -- # decimal 2 00:06:01.595 14:12:07 -- scripts/common.sh@352 -- # local d=2 00:06:01.595 14:12:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.595 14:12:07 -- scripts/common.sh@354 -- # echo 2 00:06:01.595 14:12:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:01.595 14:12:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:01.595 14:12:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:01.595 14:12:07 -- scripts/common.sh@367 -- # return 0 00:06:01.595 14:12:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.595 14:12:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:01.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.595 --rc genhtml_branch_coverage=1 00:06:01.595 --rc genhtml_function_coverage=1 00:06:01.595 --rc genhtml_legend=1 00:06:01.595 --rc geninfo_all_blocks=1 00:06:01.595 --rc geninfo_unexecuted_blocks=1 00:06:01.595 00:06:01.595 ' 00:06:01.595 14:12:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:01.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.595 --rc genhtml_branch_coverage=1 00:06:01.595 --rc genhtml_function_coverage=1 00:06:01.595 --rc genhtml_legend=1 00:06:01.595 --rc geninfo_all_blocks=1 00:06:01.595 --rc geninfo_unexecuted_blocks=1 00:06:01.595 00:06:01.595 ' 00:06:01.595 14:12:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:01.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.595 --rc genhtml_branch_coverage=1 00:06:01.595 --rc genhtml_function_coverage=1 00:06:01.595 --rc genhtml_legend=1 00:06:01.595 --rc geninfo_all_blocks=1 00:06:01.595 --rc geninfo_unexecuted_blocks=1 00:06:01.595 00:06:01.595 ' 00:06:01.595 14:12:07 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:01.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.596 --rc genhtml_branch_coverage=1 00:06:01.596 --rc genhtml_function_coverage=1 00:06:01.596 --rc genhtml_legend=1 00:06:01.596 --rc geninfo_all_blocks=1 00:06:01.596 --rc geninfo_unexecuted_blocks=1 00:06:01.596 00:06:01.596 ' 00:06:01.596 14:12:07 -- rpc/rpc.sh@65 -- # spdk_pid=67579 00:06:01.596 14:12:07 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:01.596 14:12:07 -- rpc/rpc.sh@67 -- # waitforlisten 67579 00:06:01.596 14:12:07 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:01.596 14:12:07 -- common/autotest_common.sh@829 -- # '[' -z 67579 ']' 00:06:01.596 14:12:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.596 14:12:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.596 14:12:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.596 14:12:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.596 14:12:07 -- common/autotest_common.sh@10 -- # set +x 00:06:01.596 [2024-12-05 14:12:07.080025] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:01.596 [2024-12-05 14:12:07.080321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67579 ] 00:06:01.596 [2024-12-05 14:12:07.219021] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.854 [2024-12-05 14:12:07.277490] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:01.854 [2024-12-05 14:12:07.278006] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:01.854 [2024-12-05 14:12:07.278172] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 67579' to capture a snapshot of events at runtime. 00:06:01.854 [2024-12-05 14:12:07.278377] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid67579 for offline analysis/debug. 
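Editor's note: before the first rpc_integrity case, rpc.sh starts a bdev-only spdk_tgt in the background, records its PID (67579 here), and blocks in waitforlisten until the target's RPC server is accepting connections on /var/tmp/spdk.sock; every later rpc_cmd call is routed to that process. Below is a minimal stand-alone version of that sequence, approximating waitforlisten with a poll against rpc.py; the command names are taken from the trace, the polling loop and trap are a simplification of the autotest helpers.

    #!/usr/bin/env bash
    SPDK_DIR=/home/vagrant/spdk_repo/spdk

    "$SPDK_DIR/build/bin/spdk_tgt" -e bdev &      # enable the bdev tracepoint group, as above
    spdk_pid=$!
    trap 'kill "$spdk_pid"' EXIT                  # stand-in for the killprocess cleanup trap

    # Wait until the RPC server answers on the default UNIX socket.
    until "$SPDK_DIR/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

    # The same calls rpc_cmd wraps in the rpc_integrity test.
    "$SPDK_DIR/scripts/rpc.py" bdev_malloc_create 8 512   # 8 MiB malloc bdev, 512-byte blocks
    "$SPDK_DIR/scripts/rpc.py" bdev_get_bdevs

The trap mirrors the killprocess handler installed by the test so the target does not outlive the script.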
00:06:01.854 [2024-12-05 14:12:07.278537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.792 14:12:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.792 14:12:08 -- common/autotest_common.sh@862 -- # return 0 00:06:02.792 14:12:08 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:02.792 14:12:08 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:02.792 14:12:08 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:02.792 14:12:08 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:02.792 14:12:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:02.792 14:12:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.792 14:12:08 -- common/autotest_common.sh@10 -- # set +x 00:06:02.792 ************************************ 00:06:02.792 START TEST rpc_integrity 00:06:02.792 ************************************ 00:06:02.792 14:12:08 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:06:02.792 14:12:08 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:02.792 14:12:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.792 14:12:08 -- common/autotest_common.sh@10 -- # set +x 00:06:02.792 14:12:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.792 14:12:08 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:02.792 14:12:08 -- rpc/rpc.sh@13 -- # jq length 00:06:02.792 14:12:08 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:02.792 14:12:08 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:02.792 14:12:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.792 14:12:08 -- common/autotest_common.sh@10 -- # set +x 00:06:02.792 14:12:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.792 14:12:08 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:02.792 14:12:08 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:02.792 14:12:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.792 14:12:08 -- common/autotest_common.sh@10 -- # set +x 00:06:02.792 14:12:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.792 14:12:08 -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:02.792 { 00:06:02.792 "aliases": [ 00:06:02.792 "50f129fc-b781-4862-acda-a550e4aaa1b6" 00:06:02.792 ], 00:06:02.792 "assigned_rate_limits": { 00:06:02.792 "r_mbytes_per_sec": 0, 00:06:02.792 "rw_ios_per_sec": 0, 00:06:02.792 "rw_mbytes_per_sec": 0, 00:06:02.792 "w_mbytes_per_sec": 0 00:06:02.792 }, 00:06:02.792 "block_size": 512, 00:06:02.792 "claimed": false, 00:06:02.792 "driver_specific": {}, 00:06:02.792 "memory_domains": [ 00:06:02.792 { 00:06:02.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:02.792 "dma_device_type": 2 00:06:02.792 } 00:06:02.792 ], 00:06:02.792 "name": "Malloc0", 00:06:02.792 "num_blocks": 16384, 00:06:02.792 "product_name": "Malloc disk", 00:06:02.792 "supported_io_types": { 00:06:02.792 "abort": true, 00:06:02.792 "compare": false, 00:06:02.792 "compare_and_write": false, 00:06:02.792 "flush": true, 00:06:02.792 "nvme_admin": false, 00:06:02.792 "nvme_io": false, 00:06:02.792 "read": true, 00:06:02.792 "reset": true, 00:06:02.792 "unmap": true, 00:06:02.792 "write": true, 00:06:02.792 "write_zeroes": true 00:06:02.792 }, 
00:06:02.792 "uuid": "50f129fc-b781-4862-acda-a550e4aaa1b6", 00:06:02.792 "zoned": false 00:06:02.792 } 00:06:02.792 ]' 00:06:02.792 14:12:08 -- rpc/rpc.sh@17 -- # jq length 00:06:02.792 14:12:08 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:02.792 14:12:08 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:02.792 14:12:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.792 14:12:08 -- common/autotest_common.sh@10 -- # set +x 00:06:02.792 [2024-12-05 14:12:08.257527] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:02.793 [2024-12-05 14:12:08.257790] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:02.793 [2024-12-05 14:12:08.257834] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14feb60 00:06:02.793 [2024-12-05 14:12:08.257876] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:02.793 [2024-12-05 14:12:08.259164] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:02.793 [2024-12-05 14:12:08.259199] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:02.793 Passthru0 00:06:02.793 14:12:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.793 14:12:08 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:02.793 14:12:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.793 14:12:08 -- common/autotest_common.sh@10 -- # set +x 00:06:02.793 14:12:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.793 14:12:08 -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:02.793 { 00:06:02.793 "aliases": [ 00:06:02.793 "50f129fc-b781-4862-acda-a550e4aaa1b6" 00:06:02.793 ], 00:06:02.793 "assigned_rate_limits": { 00:06:02.793 "r_mbytes_per_sec": 0, 00:06:02.793 "rw_ios_per_sec": 0, 00:06:02.793 "rw_mbytes_per_sec": 0, 00:06:02.793 "w_mbytes_per_sec": 0 00:06:02.793 }, 00:06:02.793 "block_size": 512, 00:06:02.793 "claim_type": "exclusive_write", 00:06:02.793 "claimed": true, 00:06:02.793 "driver_specific": {}, 00:06:02.793 "memory_domains": [ 00:06:02.793 { 00:06:02.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:02.793 "dma_device_type": 2 00:06:02.793 } 00:06:02.793 ], 00:06:02.793 "name": "Malloc0", 00:06:02.793 "num_blocks": 16384, 00:06:02.793 "product_name": "Malloc disk", 00:06:02.793 "supported_io_types": { 00:06:02.793 "abort": true, 00:06:02.793 "compare": false, 00:06:02.793 "compare_and_write": false, 00:06:02.793 "flush": true, 00:06:02.793 "nvme_admin": false, 00:06:02.793 "nvme_io": false, 00:06:02.793 "read": true, 00:06:02.793 "reset": true, 00:06:02.793 "unmap": true, 00:06:02.793 "write": true, 00:06:02.793 "write_zeroes": true 00:06:02.793 }, 00:06:02.793 "uuid": "50f129fc-b781-4862-acda-a550e4aaa1b6", 00:06:02.793 "zoned": false 00:06:02.793 }, 00:06:02.793 { 00:06:02.793 "aliases": [ 00:06:02.793 "ea260d27-d5ee-5ea3-9d29-ca6d070a9015" 00:06:02.793 ], 00:06:02.793 "assigned_rate_limits": { 00:06:02.793 "r_mbytes_per_sec": 0, 00:06:02.793 "rw_ios_per_sec": 0, 00:06:02.793 "rw_mbytes_per_sec": 0, 00:06:02.793 "w_mbytes_per_sec": 0 00:06:02.793 }, 00:06:02.793 "block_size": 512, 00:06:02.793 "claimed": false, 00:06:02.793 "driver_specific": { 00:06:02.793 "passthru": { 00:06:02.793 "base_bdev_name": "Malloc0", 00:06:02.793 "name": "Passthru0" 00:06:02.793 } 00:06:02.793 }, 00:06:02.793 "memory_domains": [ 00:06:02.793 { 00:06:02.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:02.793 "dma_device_type": 2 00:06:02.793 } 00:06:02.793 ], 
00:06:02.793 "name": "Passthru0", 00:06:02.793 "num_blocks": 16384, 00:06:02.793 "product_name": "passthru", 00:06:02.793 "supported_io_types": { 00:06:02.793 "abort": true, 00:06:02.793 "compare": false, 00:06:02.793 "compare_and_write": false, 00:06:02.793 "flush": true, 00:06:02.793 "nvme_admin": false, 00:06:02.793 "nvme_io": false, 00:06:02.793 "read": true, 00:06:02.793 "reset": true, 00:06:02.793 "unmap": true, 00:06:02.793 "write": true, 00:06:02.793 "write_zeroes": true 00:06:02.793 }, 00:06:02.793 "uuid": "ea260d27-d5ee-5ea3-9d29-ca6d070a9015", 00:06:02.793 "zoned": false 00:06:02.793 } 00:06:02.793 ]' 00:06:02.793 14:12:08 -- rpc/rpc.sh@21 -- # jq length 00:06:02.793 14:12:08 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:02.793 14:12:08 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:02.793 14:12:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.793 14:12:08 -- common/autotest_common.sh@10 -- # set +x 00:06:02.793 14:12:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.793 14:12:08 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:02.793 14:12:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.793 14:12:08 -- common/autotest_common.sh@10 -- # set +x 00:06:02.793 14:12:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.793 14:12:08 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:02.793 14:12:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.793 14:12:08 -- common/autotest_common.sh@10 -- # set +x 00:06:02.793 14:12:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.793 14:12:08 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:02.793 14:12:08 -- rpc/rpc.sh@26 -- # jq length 00:06:02.793 ************************************ 00:06:02.793 END TEST rpc_integrity 00:06:02.793 ************************************ 00:06:02.793 14:12:08 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:02.793 00:06:02.793 real 0m0.322s 00:06:02.793 user 0m0.202s 00:06:02.793 sys 0m0.044s 00:06:02.793 14:12:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:02.793 14:12:08 -- common/autotest_common.sh@10 -- # set +x 00:06:03.053 14:12:08 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:03.053 14:12:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:03.053 14:12:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:03.053 14:12:08 -- common/autotest_common.sh@10 -- # set +x 00:06:03.053 ************************************ 00:06:03.053 START TEST rpc_plugins 00:06:03.053 ************************************ 00:06:03.053 14:12:08 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:06:03.053 14:12:08 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:03.053 14:12:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.053 14:12:08 -- common/autotest_common.sh@10 -- # set +x 00:06:03.053 14:12:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.053 14:12:08 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:03.053 14:12:08 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:03.053 14:12:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.053 14:12:08 -- common/autotest_common.sh@10 -- # set +x 00:06:03.053 14:12:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.053 14:12:08 -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:03.053 { 00:06:03.053 "aliases": [ 00:06:03.053 "6472657a-fd31-4901-a0b0-388d57dc52ef" 00:06:03.053 ], 00:06:03.053 "assigned_rate_limits": { 00:06:03.053 "r_mbytes_per_sec": 0, 00:06:03.053 
"rw_ios_per_sec": 0, 00:06:03.053 "rw_mbytes_per_sec": 0, 00:06:03.053 "w_mbytes_per_sec": 0 00:06:03.053 }, 00:06:03.053 "block_size": 4096, 00:06:03.053 "claimed": false, 00:06:03.053 "driver_specific": {}, 00:06:03.053 "memory_domains": [ 00:06:03.053 { 00:06:03.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:03.053 "dma_device_type": 2 00:06:03.053 } 00:06:03.053 ], 00:06:03.053 "name": "Malloc1", 00:06:03.053 "num_blocks": 256, 00:06:03.053 "product_name": "Malloc disk", 00:06:03.053 "supported_io_types": { 00:06:03.053 "abort": true, 00:06:03.053 "compare": false, 00:06:03.053 "compare_and_write": false, 00:06:03.053 "flush": true, 00:06:03.053 "nvme_admin": false, 00:06:03.053 "nvme_io": false, 00:06:03.053 "read": true, 00:06:03.053 "reset": true, 00:06:03.053 "unmap": true, 00:06:03.053 "write": true, 00:06:03.053 "write_zeroes": true 00:06:03.053 }, 00:06:03.053 "uuid": "6472657a-fd31-4901-a0b0-388d57dc52ef", 00:06:03.053 "zoned": false 00:06:03.053 } 00:06:03.053 ]' 00:06:03.053 14:12:08 -- rpc/rpc.sh@32 -- # jq length 00:06:03.053 14:12:08 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:03.053 14:12:08 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:03.053 14:12:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.053 14:12:08 -- common/autotest_common.sh@10 -- # set +x 00:06:03.053 14:12:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.053 14:12:08 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:03.053 14:12:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.053 14:12:08 -- common/autotest_common.sh@10 -- # set +x 00:06:03.053 14:12:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.053 14:12:08 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:03.053 14:12:08 -- rpc/rpc.sh@36 -- # jq length 00:06:03.053 ************************************ 00:06:03.053 END TEST rpc_plugins 00:06:03.053 ************************************ 00:06:03.053 14:12:08 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:03.053 00:06:03.053 real 0m0.162s 00:06:03.053 user 0m0.107s 00:06:03.053 sys 0m0.018s 00:06:03.053 14:12:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:03.053 14:12:08 -- common/autotest_common.sh@10 -- # set +x 00:06:03.053 14:12:08 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:03.053 14:12:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:03.053 14:12:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:03.053 14:12:08 -- common/autotest_common.sh@10 -- # set +x 00:06:03.053 ************************************ 00:06:03.053 START TEST rpc_trace_cmd_test 00:06:03.053 ************************************ 00:06:03.053 14:12:08 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:06:03.053 14:12:08 -- rpc/rpc.sh@40 -- # local info 00:06:03.053 14:12:08 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:03.053 14:12:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.053 14:12:08 -- common/autotest_common.sh@10 -- # set +x 00:06:03.312 14:12:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.312 14:12:08 -- rpc/rpc.sh@42 -- # info='{ 00:06:03.312 "bdev": { 00:06:03.312 "mask": "0x8", 00:06:03.312 "tpoint_mask": "0xffffffffffffffff" 00:06:03.312 }, 00:06:03.312 "bdev_nvme": { 00:06:03.312 "mask": "0x4000", 00:06:03.312 "tpoint_mask": "0x0" 00:06:03.312 }, 00:06:03.312 "blobfs": { 00:06:03.312 "mask": "0x80", 00:06:03.312 "tpoint_mask": "0x0" 00:06:03.312 }, 00:06:03.312 "dsa": { 00:06:03.312 "mask": "0x200", 00:06:03.313 
"tpoint_mask": "0x0" 00:06:03.313 }, 00:06:03.313 "ftl": { 00:06:03.313 "mask": "0x40", 00:06:03.313 "tpoint_mask": "0x0" 00:06:03.313 }, 00:06:03.313 "iaa": { 00:06:03.313 "mask": "0x1000", 00:06:03.313 "tpoint_mask": "0x0" 00:06:03.313 }, 00:06:03.313 "iscsi_conn": { 00:06:03.313 "mask": "0x2", 00:06:03.313 "tpoint_mask": "0x0" 00:06:03.313 }, 00:06:03.313 "nvme_pcie": { 00:06:03.313 "mask": "0x800", 00:06:03.313 "tpoint_mask": "0x0" 00:06:03.313 }, 00:06:03.313 "nvme_tcp": { 00:06:03.313 "mask": "0x2000", 00:06:03.313 "tpoint_mask": "0x0" 00:06:03.313 }, 00:06:03.313 "nvmf_rdma": { 00:06:03.313 "mask": "0x10", 00:06:03.313 "tpoint_mask": "0x0" 00:06:03.313 }, 00:06:03.313 "nvmf_tcp": { 00:06:03.313 "mask": "0x20", 00:06:03.313 "tpoint_mask": "0x0" 00:06:03.313 }, 00:06:03.313 "scsi": { 00:06:03.313 "mask": "0x4", 00:06:03.313 "tpoint_mask": "0x0" 00:06:03.313 }, 00:06:03.313 "thread": { 00:06:03.313 "mask": "0x400", 00:06:03.313 "tpoint_mask": "0x0" 00:06:03.313 }, 00:06:03.313 "tpoint_group_mask": "0x8", 00:06:03.313 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid67579" 00:06:03.313 }' 00:06:03.313 14:12:08 -- rpc/rpc.sh@43 -- # jq length 00:06:03.313 14:12:08 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:06:03.313 14:12:08 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:03.313 14:12:08 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:03.313 14:12:08 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:03.313 14:12:08 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:03.313 14:12:08 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:03.313 14:12:08 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:03.313 14:12:08 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:03.572 ************************************ 00:06:03.572 END TEST rpc_trace_cmd_test 00:06:03.572 ************************************ 00:06:03.572 14:12:08 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:03.572 00:06:03.572 real 0m0.279s 00:06:03.572 user 0m0.241s 00:06:03.572 sys 0m0.029s 00:06:03.572 14:12:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:03.572 14:12:08 -- common/autotest_common.sh@10 -- # set +x 00:06:03.572 14:12:09 -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:06:03.572 14:12:09 -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:06:03.572 14:12:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:03.572 14:12:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:03.572 14:12:09 -- common/autotest_common.sh@10 -- # set +x 00:06:03.572 ************************************ 00:06:03.572 START TEST go_rpc 00:06:03.572 ************************************ 00:06:03.572 14:12:09 -- common/autotest_common.sh@1114 -- # go_rpc 00:06:03.572 14:12:09 -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:06:03.572 14:12:09 -- rpc/rpc.sh@51 -- # bdevs='[]' 00:06:03.572 14:12:09 -- rpc/rpc.sh@52 -- # jq length 00:06:03.572 14:12:09 -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:06:03.572 14:12:09 -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:06:03.572 14:12:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.572 14:12:09 -- common/autotest_common.sh@10 -- # set +x 00:06:03.572 14:12:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.572 14:12:09 -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:06:03.572 14:12:09 -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:06:03.572 14:12:09 -- rpc/rpc.sh@56 -- # 
bdevs='[{"aliases":["023dfee3-9bc7-4827-b65d-52e5f100f7e0"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"023dfee3-9bc7-4827-b65d-52e5f100f7e0","zoned":false}]' 00:06:03.572 14:12:09 -- rpc/rpc.sh@57 -- # jq length 00:06:03.572 14:12:09 -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:06:03.572 14:12:09 -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:03.572 14:12:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.572 14:12:09 -- common/autotest_common.sh@10 -- # set +x 00:06:03.572 14:12:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.572 14:12:09 -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:06:03.572 14:12:09 -- rpc/rpc.sh@60 -- # bdevs='[]' 00:06:03.572 14:12:09 -- rpc/rpc.sh@61 -- # jq length 00:06:03.831 14:12:09 -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:06:03.831 00:06:03.831 real 0m0.235s 00:06:03.831 user 0m0.148s 00:06:03.831 sys 0m0.046s 00:06:03.831 14:12:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:03.831 14:12:09 -- common/autotest_common.sh@10 -- # set +x 00:06:03.831 ************************************ 00:06:03.831 END TEST go_rpc 00:06:03.831 ************************************ 00:06:03.831 14:12:09 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:03.831 14:12:09 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:03.831 14:12:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:03.831 14:12:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:03.831 14:12:09 -- common/autotest_common.sh@10 -- # set +x 00:06:03.831 ************************************ 00:06:03.831 START TEST rpc_daemon_integrity 00:06:03.831 ************************************ 00:06:03.831 14:12:09 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:06:03.831 14:12:09 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:03.831 14:12:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.831 14:12:09 -- common/autotest_common.sh@10 -- # set +x 00:06:03.831 14:12:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.831 14:12:09 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:03.831 14:12:09 -- rpc/rpc.sh@13 -- # jq length 00:06:03.831 14:12:09 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:03.831 14:12:09 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:03.831 14:12:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.831 14:12:09 -- common/autotest_common.sh@10 -- # set +x 00:06:03.831 14:12:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.831 14:12:09 -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:06:03.831 14:12:09 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:03.831 14:12:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.831 14:12:09 -- common/autotest_common.sh@10 -- # set +x 00:06:03.831 14:12:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.831 14:12:09 -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:03.831 { 00:06:03.831 "aliases": [ 00:06:03.831 "5d3abea8-d174-41e7-b6b9-5ed708c033b2" 00:06:03.831 ], 00:06:03.831 "assigned_rate_limits": { 00:06:03.831 
"r_mbytes_per_sec": 0, 00:06:03.831 "rw_ios_per_sec": 0, 00:06:03.831 "rw_mbytes_per_sec": 0, 00:06:03.831 "w_mbytes_per_sec": 0 00:06:03.831 }, 00:06:03.831 "block_size": 512, 00:06:03.831 "claimed": false, 00:06:03.831 "driver_specific": {}, 00:06:03.831 "memory_domains": [ 00:06:03.831 { 00:06:03.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:03.831 "dma_device_type": 2 00:06:03.831 } 00:06:03.831 ], 00:06:03.831 "name": "Malloc3", 00:06:03.831 "num_blocks": 16384, 00:06:03.831 "product_name": "Malloc disk", 00:06:03.831 "supported_io_types": { 00:06:03.831 "abort": true, 00:06:03.831 "compare": false, 00:06:03.831 "compare_and_write": false, 00:06:03.831 "flush": true, 00:06:03.831 "nvme_admin": false, 00:06:03.831 "nvme_io": false, 00:06:03.831 "read": true, 00:06:03.831 "reset": true, 00:06:03.831 "unmap": true, 00:06:03.831 "write": true, 00:06:03.831 "write_zeroes": true 00:06:03.831 }, 00:06:03.831 "uuid": "5d3abea8-d174-41e7-b6b9-5ed708c033b2", 00:06:03.831 "zoned": false 00:06:03.831 } 00:06:03.831 ]' 00:06:03.831 14:12:09 -- rpc/rpc.sh@17 -- # jq length 00:06:03.831 14:12:09 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:03.831 14:12:09 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:06:03.831 14:12:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.831 14:12:09 -- common/autotest_common.sh@10 -- # set +x 00:06:03.831 [2024-12-05 14:12:09.467829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:03.831 [2024-12-05 14:12:09.467886] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:03.831 [2024-12-05 14:12:09.467903] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1500990 00:06:03.831 [2024-12-05 14:12:09.467912] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:03.831 [2024-12-05 14:12:09.469034] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:03.831 [2024-12-05 14:12:09.469081] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:03.831 Passthru0 00:06:03.831 14:12:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.831 14:12:09 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:03.831 14:12:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.831 14:12:09 -- common/autotest_common.sh@10 -- # set +x 00:06:04.091 14:12:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.091 14:12:09 -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:04.091 { 00:06:04.091 "aliases": [ 00:06:04.091 "5d3abea8-d174-41e7-b6b9-5ed708c033b2" 00:06:04.091 ], 00:06:04.091 "assigned_rate_limits": { 00:06:04.091 "r_mbytes_per_sec": 0, 00:06:04.091 "rw_ios_per_sec": 0, 00:06:04.091 "rw_mbytes_per_sec": 0, 00:06:04.091 "w_mbytes_per_sec": 0 00:06:04.091 }, 00:06:04.091 "block_size": 512, 00:06:04.091 "claim_type": "exclusive_write", 00:06:04.091 "claimed": true, 00:06:04.091 "driver_specific": {}, 00:06:04.091 "memory_domains": [ 00:06:04.091 { 00:06:04.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:04.091 "dma_device_type": 2 00:06:04.091 } 00:06:04.091 ], 00:06:04.091 "name": "Malloc3", 00:06:04.091 "num_blocks": 16384, 00:06:04.091 "product_name": "Malloc disk", 00:06:04.091 "supported_io_types": { 00:06:04.091 "abort": true, 00:06:04.091 "compare": false, 00:06:04.091 "compare_and_write": false, 00:06:04.091 "flush": true, 00:06:04.091 "nvme_admin": false, 00:06:04.091 "nvme_io": false, 00:06:04.091 "read": true, 00:06:04.091 "reset": true, 
00:06:04.091 "unmap": true, 00:06:04.091 "write": true, 00:06:04.091 "write_zeroes": true 00:06:04.091 }, 00:06:04.091 "uuid": "5d3abea8-d174-41e7-b6b9-5ed708c033b2", 00:06:04.091 "zoned": false 00:06:04.091 }, 00:06:04.091 { 00:06:04.091 "aliases": [ 00:06:04.091 "7e49d547-fc34-568c-96ed-6cf0c35d1f7d" 00:06:04.091 ], 00:06:04.091 "assigned_rate_limits": { 00:06:04.091 "r_mbytes_per_sec": 0, 00:06:04.091 "rw_ios_per_sec": 0, 00:06:04.091 "rw_mbytes_per_sec": 0, 00:06:04.091 "w_mbytes_per_sec": 0 00:06:04.091 }, 00:06:04.091 "block_size": 512, 00:06:04.091 "claimed": false, 00:06:04.091 "driver_specific": { 00:06:04.091 "passthru": { 00:06:04.091 "base_bdev_name": "Malloc3", 00:06:04.091 "name": "Passthru0" 00:06:04.091 } 00:06:04.091 }, 00:06:04.091 "memory_domains": [ 00:06:04.091 { 00:06:04.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:04.091 "dma_device_type": 2 00:06:04.091 } 00:06:04.091 ], 00:06:04.091 "name": "Passthru0", 00:06:04.091 "num_blocks": 16384, 00:06:04.091 "product_name": "passthru", 00:06:04.091 "supported_io_types": { 00:06:04.091 "abort": true, 00:06:04.091 "compare": false, 00:06:04.091 "compare_and_write": false, 00:06:04.091 "flush": true, 00:06:04.091 "nvme_admin": false, 00:06:04.091 "nvme_io": false, 00:06:04.091 "read": true, 00:06:04.091 "reset": true, 00:06:04.091 "unmap": true, 00:06:04.091 "write": true, 00:06:04.091 "write_zeroes": true 00:06:04.091 }, 00:06:04.091 "uuid": "7e49d547-fc34-568c-96ed-6cf0c35d1f7d", 00:06:04.091 "zoned": false 00:06:04.091 } 00:06:04.091 ]' 00:06:04.091 14:12:09 -- rpc/rpc.sh@21 -- # jq length 00:06:04.091 14:12:09 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:04.091 14:12:09 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:04.091 14:12:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.091 14:12:09 -- common/autotest_common.sh@10 -- # set +x 00:06:04.091 14:12:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.091 14:12:09 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:06:04.091 14:12:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.091 14:12:09 -- common/autotest_common.sh@10 -- # set +x 00:06:04.091 14:12:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.091 14:12:09 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:04.091 14:12:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.091 14:12:09 -- common/autotest_common.sh@10 -- # set +x 00:06:04.091 14:12:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.091 14:12:09 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:04.091 14:12:09 -- rpc/rpc.sh@26 -- # jq length 00:06:04.091 14:12:09 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:04.091 00:06:04.091 real 0m0.320s 00:06:04.091 user 0m0.218s 00:06:04.091 sys 0m0.034s 00:06:04.091 14:12:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:04.091 14:12:09 -- common/autotest_common.sh@10 -- # set +x 00:06:04.091 ************************************ 00:06:04.091 END TEST rpc_daemon_integrity 00:06:04.091 ************************************ 00:06:04.091 14:12:09 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:04.091 14:12:09 -- rpc/rpc.sh@84 -- # killprocess 67579 00:06:04.091 14:12:09 -- common/autotest_common.sh@936 -- # '[' -z 67579 ']' 00:06:04.091 14:12:09 -- common/autotest_common.sh@940 -- # kill -0 67579 00:06:04.091 14:12:09 -- common/autotest_common.sh@941 -- # uname 00:06:04.091 14:12:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:04.091 14:12:09 -- common/autotest_common.sh@942 -- 
# ps --no-headers -o comm= 67579 00:06:04.091 14:12:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:04.091 14:12:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:04.091 killing process with pid 67579 00:06:04.091 14:12:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67579' 00:06:04.091 14:12:09 -- common/autotest_common.sh@955 -- # kill 67579 00:06:04.091 14:12:09 -- common/autotest_common.sh@960 -- # wait 67579 00:06:04.735 00:06:04.735 real 0m3.425s 00:06:04.735 user 0m4.421s 00:06:04.735 sys 0m0.820s 00:06:04.735 ************************************ 00:06:04.735 END TEST rpc 00:06:04.735 ************************************ 00:06:04.735 14:12:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:04.735 14:12:10 -- common/autotest_common.sh@10 -- # set +x 00:06:04.735 14:12:10 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:04.735 14:12:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:04.735 14:12:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.735 14:12:10 -- common/autotest_common.sh@10 -- # set +x 00:06:04.735 ************************************ 00:06:04.735 START TEST rpc_client 00:06:04.735 ************************************ 00:06:04.735 14:12:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:04.995 * Looking for test storage... 00:06:04.995 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:04.995 14:12:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:04.995 14:12:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:04.995 14:12:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:04.995 14:12:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:04.995 14:12:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:04.995 14:12:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:04.995 14:12:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:04.995 14:12:10 -- scripts/common.sh@335 -- # IFS=.-: 00:06:04.995 14:12:10 -- scripts/common.sh@335 -- # read -ra ver1 00:06:04.995 14:12:10 -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.995 14:12:10 -- scripts/common.sh@336 -- # read -ra ver2 00:06:04.995 14:12:10 -- scripts/common.sh@337 -- # local 'op=<' 00:06:04.995 14:12:10 -- scripts/common.sh@339 -- # ver1_l=2 00:06:04.995 14:12:10 -- scripts/common.sh@340 -- # ver2_l=1 00:06:04.995 14:12:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:04.995 14:12:10 -- scripts/common.sh@343 -- # case "$op" in 00:06:04.995 14:12:10 -- scripts/common.sh@344 -- # : 1 00:06:04.995 14:12:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:04.995 14:12:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:04.995 14:12:10 -- scripts/common.sh@364 -- # decimal 1 00:06:04.995 14:12:10 -- scripts/common.sh@352 -- # local d=1 00:06:04.995 14:12:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.995 14:12:10 -- scripts/common.sh@354 -- # echo 1 00:06:04.995 14:12:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:04.995 14:12:10 -- scripts/common.sh@365 -- # decimal 2 00:06:04.995 14:12:10 -- scripts/common.sh@352 -- # local d=2 00:06:04.995 14:12:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.995 14:12:10 -- scripts/common.sh@354 -- # echo 2 00:06:04.995 14:12:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:04.995 14:12:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:04.995 14:12:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:04.995 14:12:10 -- scripts/common.sh@367 -- # return 0 00:06:04.995 14:12:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.995 14:12:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:04.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.995 --rc genhtml_branch_coverage=1 00:06:04.995 --rc genhtml_function_coverage=1 00:06:04.995 --rc genhtml_legend=1 00:06:04.995 --rc geninfo_all_blocks=1 00:06:04.995 --rc geninfo_unexecuted_blocks=1 00:06:04.995 00:06:04.995 ' 00:06:04.995 14:12:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:04.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.995 --rc genhtml_branch_coverage=1 00:06:04.995 --rc genhtml_function_coverage=1 00:06:04.995 --rc genhtml_legend=1 00:06:04.995 --rc geninfo_all_blocks=1 00:06:04.995 --rc geninfo_unexecuted_blocks=1 00:06:04.995 00:06:04.995 ' 00:06:04.995 14:12:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:04.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.995 --rc genhtml_branch_coverage=1 00:06:04.995 --rc genhtml_function_coverage=1 00:06:04.995 --rc genhtml_legend=1 00:06:04.995 --rc geninfo_all_blocks=1 00:06:04.995 --rc geninfo_unexecuted_blocks=1 00:06:04.995 00:06:04.995 ' 00:06:04.995 14:12:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:04.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.995 --rc genhtml_branch_coverage=1 00:06:04.995 --rc genhtml_function_coverage=1 00:06:04.995 --rc genhtml_legend=1 00:06:04.995 --rc geninfo_all_blocks=1 00:06:04.995 --rc geninfo_unexecuted_blocks=1 00:06:04.995 00:06:04.995 ' 00:06:04.995 14:12:10 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:04.995 OK 00:06:04.995 14:12:10 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:04.995 00:06:04.995 real 0m0.207s 00:06:04.995 user 0m0.127s 00:06:04.995 sys 0m0.093s 00:06:04.995 14:12:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:04.995 14:12:10 -- common/autotest_common.sh@10 -- # set +x 00:06:04.995 ************************************ 00:06:04.995 END TEST rpc_client 00:06:04.995 ************************************ 00:06:04.995 14:12:10 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:04.995 14:12:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:04.995 14:12:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.995 14:12:10 -- common/autotest_common.sh@10 -- # set +x 00:06:04.995 ************************************ 00:06:04.995 START TEST 
json_config 00:06:04.995 ************************************ 00:06:04.995 14:12:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:04.995 14:12:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:04.995 14:12:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:04.995 14:12:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:05.255 14:12:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:05.255 14:12:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:05.255 14:12:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:05.255 14:12:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:05.255 14:12:10 -- scripts/common.sh@335 -- # IFS=.-: 00:06:05.255 14:12:10 -- scripts/common.sh@335 -- # read -ra ver1 00:06:05.255 14:12:10 -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.255 14:12:10 -- scripts/common.sh@336 -- # read -ra ver2 00:06:05.255 14:12:10 -- scripts/common.sh@337 -- # local 'op=<' 00:06:05.255 14:12:10 -- scripts/common.sh@339 -- # ver1_l=2 00:06:05.255 14:12:10 -- scripts/common.sh@340 -- # ver2_l=1 00:06:05.255 14:12:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:05.255 14:12:10 -- scripts/common.sh@343 -- # case "$op" in 00:06:05.255 14:12:10 -- scripts/common.sh@344 -- # : 1 00:06:05.255 14:12:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:05.255 14:12:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:05.255 14:12:10 -- scripts/common.sh@364 -- # decimal 1 00:06:05.255 14:12:10 -- scripts/common.sh@352 -- # local d=1 00:06:05.255 14:12:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.255 14:12:10 -- scripts/common.sh@354 -- # echo 1 00:06:05.255 14:12:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:05.255 14:12:10 -- scripts/common.sh@365 -- # decimal 2 00:06:05.255 14:12:10 -- scripts/common.sh@352 -- # local d=2 00:06:05.255 14:12:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.255 14:12:10 -- scripts/common.sh@354 -- # echo 2 00:06:05.255 14:12:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:05.255 14:12:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:05.255 14:12:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:05.255 14:12:10 -- scripts/common.sh@367 -- # return 0 00:06:05.255 14:12:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.255 14:12:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:05.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.255 --rc genhtml_branch_coverage=1 00:06:05.255 --rc genhtml_function_coverage=1 00:06:05.255 --rc genhtml_legend=1 00:06:05.255 --rc geninfo_all_blocks=1 00:06:05.255 --rc geninfo_unexecuted_blocks=1 00:06:05.255 00:06:05.255 ' 00:06:05.255 14:12:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:05.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.255 --rc genhtml_branch_coverage=1 00:06:05.255 --rc genhtml_function_coverage=1 00:06:05.255 --rc genhtml_legend=1 00:06:05.255 --rc geninfo_all_blocks=1 00:06:05.255 --rc geninfo_unexecuted_blocks=1 00:06:05.255 00:06:05.255 ' 00:06:05.255 14:12:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:05.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.255 --rc genhtml_branch_coverage=1 00:06:05.255 --rc genhtml_function_coverage=1 00:06:05.255 --rc genhtml_legend=1 00:06:05.256 --rc 
geninfo_all_blocks=1 00:06:05.256 --rc geninfo_unexecuted_blocks=1 00:06:05.256 00:06:05.256 ' 00:06:05.256 14:12:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:05.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.256 --rc genhtml_branch_coverage=1 00:06:05.256 --rc genhtml_function_coverage=1 00:06:05.256 --rc genhtml_legend=1 00:06:05.256 --rc geninfo_all_blocks=1 00:06:05.256 --rc geninfo_unexecuted_blocks=1 00:06:05.256 00:06:05.256 ' 00:06:05.256 14:12:10 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:05.256 14:12:10 -- nvmf/common.sh@7 -- # uname -s 00:06:05.256 14:12:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:05.256 14:12:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:05.256 14:12:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:05.256 14:12:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:05.256 14:12:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:05.256 14:12:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:05.256 14:12:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:05.256 14:12:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:05.256 14:12:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:05.256 14:12:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:05.256 14:12:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:06:05.256 14:12:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:06:05.256 14:12:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:05.256 14:12:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:05.256 14:12:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:05.256 14:12:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:05.256 14:12:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:05.256 14:12:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:05.256 14:12:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:05.256 14:12:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.256 14:12:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.256 14:12:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.256 
14:12:10 -- paths/export.sh@5 -- # export PATH 00:06:05.256 14:12:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.256 14:12:10 -- nvmf/common.sh@46 -- # : 0 00:06:05.256 14:12:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:05.256 14:12:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:05.256 14:12:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:05.256 14:12:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:05.256 14:12:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:05.256 14:12:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:05.256 14:12:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:05.256 14:12:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:05.256 14:12:10 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:06:05.256 14:12:10 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:06:05.256 14:12:10 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:06:05.256 14:12:10 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:05.256 14:12:10 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:06:05.256 14:12:10 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:06:05.256 14:12:10 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:05.256 14:12:10 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:06:05.256 14:12:10 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:05.256 14:12:10 -- json_config/json_config.sh@32 -- # declare -A app_params 00:06:05.256 14:12:10 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:05.256 14:12:10 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:06:05.256 14:12:10 -- json_config/json_config.sh@43 -- # last_event_id=0 00:06:05.256 14:12:10 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:05.256 INFO: JSON configuration test init 00:06:05.256 14:12:10 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:06:05.256 14:12:10 -- json_config/json_config.sh@420 -- # json_config_test_init 00:06:05.256 14:12:10 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:06:05.256 14:12:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:05.256 14:12:10 -- common/autotest_common.sh@10 -- # set +x 00:06:05.256 14:12:10 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:06:05.256 14:12:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:05.256 14:12:10 -- common/autotest_common.sh@10 -- # set +x 00:06:05.256 14:12:10 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:06:05.256 14:12:10 -- json_config/json_config.sh@98 -- # local app=target 00:06:05.256 
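A minimal sketch of what json_config_test_start_app is doing at this point in the trace: launch spdk_tgt paused before framework init and poll its RPC socket until it answers. The flags and socket path are the ones shown in this log; the polling loop and error handling below are illustrative assumptions, not the harness's exact code.
# Sketch only: start the target with RPC init deferred and wait for its UNIX socket.
SOCK=/var/tmp/spdk_tgt.sock
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r "$SOCK" --wait-for-rpc &
app_pid=$!
# rpc_get_methods succeeds as soon as the RPC server is listening on $SOCK.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$app_pid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
    sleep 0.5
done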
14:12:10 -- json_config/json_config.sh@99 -- # shift 00:06:05.256 14:12:10 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:06:05.256 14:12:10 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:06:05.256 14:12:10 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:06:05.256 14:12:10 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:05.256 14:12:10 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:05.256 14:12:10 -- json_config/json_config.sh@111 -- # app_pid[$app]=67901 00:06:05.256 Waiting for target to run... 00:06:05.256 14:12:10 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:06:05.256 14:12:10 -- json_config/json_config.sh@114 -- # waitforlisten 67901 /var/tmp/spdk_tgt.sock 00:06:05.256 14:12:10 -- common/autotest_common.sh@829 -- # '[' -z 67901 ']' 00:06:05.256 14:12:10 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:05.256 14:12:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:05.256 14:12:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:05.256 14:12:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:05.256 14:12:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.256 14:12:10 -- common/autotest_common.sh@10 -- # set +x 00:06:05.256 [2024-12-05 14:12:10.787479] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:05.256 [2024-12-05 14:12:10.787568] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67901 ] 00:06:05.824 [2024-12-05 14:12:11.310656] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.824 [2024-12-05 14:12:11.382418] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:05.824 [2024-12-05 14:12:11.382578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.392 14:12:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.392 14:12:11 -- common/autotest_common.sh@862 -- # return 0 00:06:06.392 00:06:06.392 14:12:11 -- json_config/json_config.sh@115 -- # echo '' 00:06:06.392 14:12:11 -- json_config/json_config.sh@322 -- # create_accel_config 00:06:06.392 14:12:11 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:06:06.392 14:12:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:06.392 14:12:11 -- common/autotest_common.sh@10 -- # set +x 00:06:06.392 14:12:11 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:06:06.392 14:12:11 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:06:06.392 14:12:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:06.392 14:12:11 -- common/autotest_common.sh@10 -- # set +x 00:06:06.392 14:12:11 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:06.392 14:12:11 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:06:06.392 14:12:11 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
load_config 00:06:06.959 14:12:12 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:06:06.959 14:12:12 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:06:06.959 14:12:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:06.959 14:12:12 -- common/autotest_common.sh@10 -- # set +x 00:06:06.959 14:12:12 -- json_config/json_config.sh@48 -- # local ret=0 00:06:06.959 14:12:12 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:06.959 14:12:12 -- json_config/json_config.sh@49 -- # local enabled_types 00:06:06.959 14:12:12 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:06.959 14:12:12 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:06.959 14:12:12 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:06.959 14:12:12 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:06.959 14:12:12 -- json_config/json_config.sh@51 -- # local get_types 00:06:06.959 14:12:12 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:06.959 14:12:12 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:06:06.959 14:12:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:06.959 14:12:12 -- common/autotest_common.sh@10 -- # set +x 00:06:06.959 14:12:12 -- json_config/json_config.sh@58 -- # return 0 00:06:06.959 14:12:12 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:06:06.959 14:12:12 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:06:06.959 14:12:12 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:06:06.959 14:12:12 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:06:06.959 14:12:12 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:06:06.959 14:12:12 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:06:06.959 14:12:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:06.959 14:12:12 -- common/autotest_common.sh@10 -- # set +x 00:06:06.959 14:12:12 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:06.959 14:12:12 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:06:06.959 14:12:12 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:06:06.959 14:12:12 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:06.959 14:12:12 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:07.218 MallocForNvmf0 00:06:07.218 14:12:12 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:07.218 14:12:12 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:07.476 MallocForNvmf1 00:06:07.476 14:12:13 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:07.476 14:12:13 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:07.734 [2024-12-05 14:12:13.323183] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:07.734 14:12:13 -- json_config/json_config.sh@299 -- # tgt_rpc 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:07.734 14:12:13 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:07.992 14:12:13 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:07.992 14:12:13 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:08.250 14:12:13 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:08.251 14:12:13 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:08.509 14:12:13 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:08.509 14:12:13 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:08.509 [2024-12-05 14:12:14.127577] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:08.509 14:12:14 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:06:08.509 14:12:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:08.509 14:12:14 -- common/autotest_common.sh@10 -- # set +x 00:06:08.766 14:12:14 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:06:08.766 14:12:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:08.766 14:12:14 -- common/autotest_common.sh@10 -- # set +x 00:06:08.766 14:12:14 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:06:08.766 14:12:14 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:08.766 14:12:14 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:09.024 MallocBdevForConfigChangeCheck 00:06:09.024 14:12:14 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:06:09.024 14:12:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:09.024 14:12:14 -- common/autotest_common.sh@10 -- # set +x 00:06:09.024 14:12:14 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:06:09.024 14:12:14 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:09.281 INFO: shutting down applications... 00:06:09.281 14:12:14 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 
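The create_nvmf_subsystem_config step traced above reduces to the following RPC sequence; this is a hand-written sketch of the same calls issued directly with rpc.py, with the malloc sizes, transport options, and NQN taken from the log. Only the final save_config redirect is an assumption about where the saved file ends up.
# Sketch: the nvmf configuration built above, issued as plain RPCs against the same socket.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$rpc bdev_malloc_create 8 512 --name MallocForNvmf0
$rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
$rpc nvmf_create_transport -t tcp -u 8192 -c 0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
# The saved configuration is what the later relaunch reads back with --json.
$rpc save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json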
00:06:09.281 14:12:14 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:06:09.281 14:12:14 -- json_config/json_config.sh@431 -- # json_config_clear target 00:06:09.281 14:12:14 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:06:09.281 14:12:14 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:09.540 Calling clear_iscsi_subsystem 00:06:09.540 Calling clear_nvmf_subsystem 00:06:09.540 Calling clear_nbd_subsystem 00:06:09.540 Calling clear_ublk_subsystem 00:06:09.540 Calling clear_vhost_blk_subsystem 00:06:09.540 Calling clear_vhost_scsi_subsystem 00:06:09.540 Calling clear_scheduler_subsystem 00:06:09.540 Calling clear_bdev_subsystem 00:06:09.540 Calling clear_accel_subsystem 00:06:09.540 Calling clear_vmd_subsystem 00:06:09.540 Calling clear_sock_subsystem 00:06:09.540 Calling clear_iobuf_subsystem 00:06:09.540 14:12:15 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:09.540 14:12:15 -- json_config/json_config.sh@396 -- # count=100 00:06:09.540 14:12:15 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:06:09.540 14:12:15 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:09.540 14:12:15 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:09.540 14:12:15 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:10.106 14:12:15 -- json_config/json_config.sh@398 -- # break 00:06:10.106 14:12:15 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:06:10.106 14:12:15 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:06:10.106 14:12:15 -- json_config/json_config.sh@120 -- # local app=target 00:06:10.106 14:12:15 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:06:10.106 14:12:15 -- json_config/json_config.sh@124 -- # [[ -n 67901 ]] 00:06:10.106 14:12:15 -- json_config/json_config.sh@127 -- # kill -SIGINT 67901 00:06:10.106 14:12:15 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:06:10.106 14:12:15 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:06:10.106 14:12:15 -- json_config/json_config.sh@130 -- # kill -0 67901 00:06:10.106 14:12:15 -- json_config/json_config.sh@134 -- # sleep 0.5 00:06:10.671 14:12:16 -- json_config/json_config.sh@129 -- # (( i++ )) 00:06:10.671 14:12:16 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:06:10.671 14:12:16 -- json_config/json_config.sh@130 -- # kill -0 67901 00:06:10.671 14:12:16 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:06:10.671 14:12:16 -- json_config/json_config.sh@132 -- # break 00:06:10.671 14:12:16 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:06:10.671 SPDK target shutdown done 00:06:10.671 14:12:16 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:06:10.671 INFO: relaunching applications... 00:06:10.671 14:12:16 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 
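A condensed sketch of the shutdown and relaunch pattern traced above: SIGINT the running target, poll until the pid disappears, then restart it from the saved JSON config. The loop bounds (30 iterations of 0.5 s) mirror the trace; the $app_pid variable is an assumption standing in for the pid the harness tracks.
# Sketch: stop the target cleanly, wait for it to exit, then restart from the saved JSON.
kill -SIGINT "$app_pid"
for (( i = 0; i < 30; i++ )); do
    kill -0 "$app_pid" 2>/dev/null || break
    sleep 0.5
done
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
    -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json &
app_pid=$!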
00:06:10.671 14:12:16 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:10.671 14:12:16 -- json_config/json_config.sh@98 -- # local app=target 00:06:10.671 14:12:16 -- json_config/json_config.sh@99 -- # shift 00:06:10.671 14:12:16 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:06:10.671 14:12:16 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:06:10.671 14:12:16 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:06:10.671 14:12:16 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:10.671 14:12:16 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:10.671 14:12:16 -- json_config/json_config.sh@111 -- # app_pid[$app]=68170 00:06:10.671 14:12:16 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:10.671 Waiting for target to run... 00:06:10.671 14:12:16 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:06:10.671 14:12:16 -- json_config/json_config.sh@114 -- # waitforlisten 68170 /var/tmp/spdk_tgt.sock 00:06:10.671 14:12:16 -- common/autotest_common.sh@829 -- # '[' -z 68170 ']' 00:06:10.671 14:12:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:10.671 14:12:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:10.671 14:12:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:10.671 14:12:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.671 14:12:16 -- common/autotest_common.sh@10 -- # set +x 00:06:10.671 [2024-12-05 14:12:16.139330] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:10.671 [2024-12-05 14:12:16.139438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68170 ] 00:06:10.929 [2024-12-05 14:12:16.573090] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.186 [2024-12-05 14:12:16.617210] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:11.186 [2024-12-05 14:12:16.617372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.444 [2024-12-05 14:12:16.913816] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:11.444 [2024-12-05 14:12:16.945899] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:11.444 14:12:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.444 00:06:11.444 14:12:17 -- common/autotest_common.sh@862 -- # return 0 00:06:11.444 14:12:17 -- json_config/json_config.sh@115 -- # echo '' 00:06:11.444 14:12:17 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:06:11.444 INFO: Checking if target configuration is the same... 00:06:11.444 14:12:17 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 
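The comparison json_diff.sh performs next can be sketched as follows: dump the live configuration, normalize both JSON documents with config_filter.py, and diff them. The sort-then-diff approach matches the trace; the temp-file and stdin/stdout plumbing here is an assumption for illustration.
# Sketch: identical sorted output means the relaunched target reproduced the saved config.
live=$(mktemp); ondisk=$(mktemp)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort > "$live"
/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort \
    < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > "$ondisk"
diff -u "$ondisk" "$live" && echo 'INFO: JSON config files are the same'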
00:06:11.444 14:12:17 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:11.444 14:12:17 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:06:11.444 14:12:17 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:11.444 + '[' 2 -ne 2 ']' 00:06:11.444 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:11.444 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:11.444 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:11.444 +++ basename /dev/fd/62 00:06:11.444 ++ mktemp /tmp/62.XXX 00:06:11.444 + tmp_file_1=/tmp/62.AC4 00:06:11.444 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:11.444 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:11.444 + tmp_file_2=/tmp/spdk_tgt_config.json.g0p 00:06:11.444 + ret=0 00:06:11.444 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:12.008 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:12.008 + diff -u /tmp/62.AC4 /tmp/spdk_tgt_config.json.g0p 00:06:12.008 INFO: JSON config files are the same 00:06:12.008 + echo 'INFO: JSON config files are the same' 00:06:12.008 + rm /tmp/62.AC4 /tmp/spdk_tgt_config.json.g0p 00:06:12.008 + exit 0 00:06:12.008 14:12:17 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:06:12.008 INFO: changing configuration and checking if this can be detected... 00:06:12.008 14:12:17 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:12.008 14:12:17 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:12.008 14:12:17 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:12.266 14:12:17 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:12.266 14:12:17 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:06:12.266 14:12:17 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:12.266 + '[' 2 -ne 2 ']' 00:06:12.266 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:12.266 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:06:12.266 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:12.266 +++ basename /dev/fd/62 00:06:12.266 ++ mktemp /tmp/62.XXX 00:06:12.266 + tmp_file_1=/tmp/62.W08 00:06:12.266 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:12.266 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:12.266 + tmp_file_2=/tmp/spdk_tgt_config.json.K4q 00:06:12.266 + ret=0 00:06:12.266 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:12.524 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:12.524 + diff -u /tmp/62.W08 /tmp/spdk_tgt_config.json.K4q 00:06:12.524 + ret=1 00:06:12.524 + echo '=== Start of file: /tmp/62.W08 ===' 00:06:12.524 + cat /tmp/62.W08 00:06:12.524 + echo '=== End of file: /tmp/62.W08 ===' 00:06:12.524 + echo '' 00:06:12.524 + echo '=== Start of file: /tmp/spdk_tgt_config.json.K4q ===' 00:06:12.524 + cat /tmp/spdk_tgt_config.json.K4q 00:06:12.783 + echo '=== End of file: /tmp/spdk_tgt_config.json.K4q ===' 00:06:12.783 + echo '' 00:06:12.783 + rm /tmp/62.W08 /tmp/spdk_tgt_config.json.K4q 00:06:12.783 + exit 1 00:06:12.784 INFO: configuration change detected. 00:06:12.784 14:12:18 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:06:12.784 14:12:18 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:06:12.784 14:12:18 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:06:12.784 14:12:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:12.784 14:12:18 -- common/autotest_common.sh@10 -- # set +x 00:06:12.784 14:12:18 -- json_config/json_config.sh@360 -- # local ret=0 00:06:12.784 14:12:18 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:06:12.784 14:12:18 -- json_config/json_config.sh@370 -- # [[ -n 68170 ]] 00:06:12.784 14:12:18 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:06:12.784 14:12:18 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:06:12.784 14:12:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:12.784 14:12:18 -- common/autotest_common.sh@10 -- # set +x 00:06:12.784 14:12:18 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:06:12.784 14:12:18 -- json_config/json_config.sh@246 -- # uname -s 00:06:12.784 14:12:18 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:06:12.784 14:12:18 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:06:12.784 14:12:18 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:06:12.784 14:12:18 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:06:12.784 14:12:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:12.784 14:12:18 -- common/autotest_common.sh@10 -- # set +x 00:06:12.784 14:12:18 -- json_config/json_config.sh@376 -- # killprocess 68170 00:06:12.784 14:12:18 -- common/autotest_common.sh@936 -- # '[' -z 68170 ']' 00:06:12.784 14:12:18 -- common/autotest_common.sh@940 -- # kill -0 68170 00:06:12.784 14:12:18 -- common/autotest_common.sh@941 -- # uname 00:06:12.784 14:12:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:12.784 14:12:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68170 00:06:12.784 14:12:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:12.784 14:12:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:12.784 14:12:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68170' 00:06:12.784 killing process with pid 68170 00:06:12.784 
14:12:18 -- common/autotest_common.sh@955 -- # kill 68170 00:06:12.784 14:12:18 -- common/autotest_common.sh@960 -- # wait 68170 00:06:13.043 14:12:18 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:13.043 14:12:18 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:06:13.043 14:12:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:13.043 14:12:18 -- common/autotest_common.sh@10 -- # set +x 00:06:13.043 14:12:18 -- json_config/json_config.sh@381 -- # return 0 00:06:13.043 INFO: Success 00:06:13.043 14:12:18 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:06:13.043 00:06:13.043 real 0m7.992s 00:06:13.043 user 0m11.050s 00:06:13.043 sys 0m1.936s 00:06:13.043 14:12:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:13.043 14:12:18 -- common/autotest_common.sh@10 -- # set +x 00:06:13.043 ************************************ 00:06:13.043 END TEST json_config 00:06:13.043 ************************************ 00:06:13.043 14:12:18 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:13.043 14:12:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:13.043 14:12:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.043 14:12:18 -- common/autotest_common.sh@10 -- # set +x 00:06:13.043 ************************************ 00:06:13.043 START TEST json_config_extra_key 00:06:13.043 ************************************ 00:06:13.043 14:12:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:13.043 14:12:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:13.043 14:12:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:13.043 14:12:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:13.302 14:12:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:13.302 14:12:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:13.302 14:12:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:13.302 14:12:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:13.302 14:12:18 -- scripts/common.sh@335 -- # IFS=.-: 00:06:13.302 14:12:18 -- scripts/common.sh@335 -- # read -ra ver1 00:06:13.302 14:12:18 -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.302 14:12:18 -- scripts/common.sh@336 -- # read -ra ver2 00:06:13.302 14:12:18 -- scripts/common.sh@337 -- # local 'op=<' 00:06:13.302 14:12:18 -- scripts/common.sh@339 -- # ver1_l=2 00:06:13.302 14:12:18 -- scripts/common.sh@340 -- # ver2_l=1 00:06:13.302 14:12:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:13.302 14:12:18 -- scripts/common.sh@343 -- # case "$op" in 00:06:13.302 14:12:18 -- scripts/common.sh@344 -- # : 1 00:06:13.302 14:12:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:13.302 14:12:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:13.302 14:12:18 -- scripts/common.sh@364 -- # decimal 1 00:06:13.302 14:12:18 -- scripts/common.sh@352 -- # local d=1 00:06:13.302 14:12:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.302 14:12:18 -- scripts/common.sh@354 -- # echo 1 00:06:13.302 14:12:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:13.302 14:12:18 -- scripts/common.sh@365 -- # decimal 2 00:06:13.302 14:12:18 -- scripts/common.sh@352 -- # local d=2 00:06:13.302 14:12:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.302 14:12:18 -- scripts/common.sh@354 -- # echo 2 00:06:13.302 14:12:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:13.302 14:12:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:13.302 14:12:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:13.302 14:12:18 -- scripts/common.sh@367 -- # return 0 00:06:13.302 14:12:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.302 14:12:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:13.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.302 --rc genhtml_branch_coverage=1 00:06:13.302 --rc genhtml_function_coverage=1 00:06:13.302 --rc genhtml_legend=1 00:06:13.302 --rc geninfo_all_blocks=1 00:06:13.302 --rc geninfo_unexecuted_blocks=1 00:06:13.302 00:06:13.302 ' 00:06:13.302 14:12:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:13.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.302 --rc genhtml_branch_coverage=1 00:06:13.302 --rc genhtml_function_coverage=1 00:06:13.302 --rc genhtml_legend=1 00:06:13.302 --rc geninfo_all_blocks=1 00:06:13.302 --rc geninfo_unexecuted_blocks=1 00:06:13.302 00:06:13.302 ' 00:06:13.302 14:12:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:13.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.302 --rc genhtml_branch_coverage=1 00:06:13.302 --rc genhtml_function_coverage=1 00:06:13.302 --rc genhtml_legend=1 00:06:13.302 --rc geninfo_all_blocks=1 00:06:13.302 --rc geninfo_unexecuted_blocks=1 00:06:13.302 00:06:13.302 ' 00:06:13.302 14:12:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:13.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.302 --rc genhtml_branch_coverage=1 00:06:13.302 --rc genhtml_function_coverage=1 00:06:13.302 --rc genhtml_legend=1 00:06:13.302 --rc geninfo_all_blocks=1 00:06:13.302 --rc geninfo_unexecuted_blocks=1 00:06:13.302 00:06:13.302 ' 00:06:13.302 14:12:18 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:13.302 14:12:18 -- nvmf/common.sh@7 -- # uname -s 00:06:13.302 14:12:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:13.302 14:12:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:13.302 14:12:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:13.302 14:12:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:13.302 14:12:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:13.302 14:12:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:13.302 14:12:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:13.302 14:12:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:13.302 14:12:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:13.303 14:12:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:13.303 14:12:18 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:06:13.303 14:12:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:06:13.303 14:12:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:13.303 14:12:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:13.303 14:12:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:13.303 14:12:18 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:13.303 14:12:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:13.303 14:12:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:13.303 14:12:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:13.303 14:12:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.303 14:12:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.303 14:12:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.303 14:12:18 -- paths/export.sh@5 -- # export PATH 00:06:13.303 14:12:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.303 14:12:18 -- nvmf/common.sh@46 -- # : 0 00:06:13.303 14:12:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:13.303 14:12:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:13.303 14:12:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:13.303 14:12:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:13.303 14:12:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:13.303 14:12:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:13.303 14:12:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:13.303 14:12:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:13.303 14:12:18 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:06:13.303 14:12:18 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:06:13.303 14:12:18 -- json_config/json_config_extra_key.sh@17 -- # 
app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:13.303 14:12:18 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:06:13.303 14:12:18 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:13.303 14:12:18 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:06:13.303 14:12:18 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:13.303 14:12:18 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:06:13.303 14:12:18 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:13.303 14:12:18 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:06:13.303 INFO: launching applications... 00:06:13.303 14:12:18 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:13.303 Waiting for target to run... 00:06:13.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:13.303 14:12:18 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:06:13.303 14:12:18 -- json_config/json_config_extra_key.sh@25 -- # shift 00:06:13.303 14:12:18 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:06:13.303 14:12:18 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:06:13.303 14:12:18 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=68353 00:06:13.303 14:12:18 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:06:13.303 14:12:18 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 68353 /var/tmp/spdk_tgt.sock 00:06:13.303 14:12:18 -- common/autotest_common.sh@829 -- # '[' -z 68353 ']' 00:06:13.303 14:12:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:13.303 14:12:18 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:13.303 14:12:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.303 14:12:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:13.303 14:12:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.303 14:12:18 -- common/autotest_common.sh@10 -- # set +x 00:06:13.303 [2024-12-05 14:12:18.840300] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:13.303 [2024-12-05 14:12:18.840402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68353 ] 00:06:13.871 [2024-12-05 14:12:19.386735] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.871 [2024-12-05 14:12:19.454584] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:13.871 [2024-12-05 14:12:19.454741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.438 00:06:14.438 INFO: shutting down applications... 
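What this extra_key pass exercises is simply: boot spdk_tgt straight from a JSON config, confirm it comes up listening, then shut it down cleanly. A sketch of the flow traced here (waitforlisten is the autotest_common.sh helper; timings taken from this run):

  # Launch the target with the extra_key JSON applied at startup.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock \
      --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
  app_pid=$!
  waitforlisten "$app_pid" /var/tmp/spdk_tgt.sock   # blocks until the RPC socket answers

  # Graceful shutdown: SIGINT, then poll for up to ~15 s (30 x 0.5 s).
  kill -SIGINT "$app_pid"
  for ((i = 0; i < 30; i++)); do
      kill -0 "$app_pid" 2>/dev/null || break       # gone -> "SPDK target shutdown done"
      sleep 0.5
  done
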
00:06:14.438 14:12:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.438 14:12:19 -- common/autotest_common.sh@862 -- # return 0 00:06:14.438 14:12:19 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:06:14.438 14:12:19 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:06:14.438 14:12:19 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:06:14.438 14:12:19 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:06:14.438 14:12:19 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:06:14.438 14:12:19 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 68353 ]] 00:06:14.438 14:12:19 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 68353 00:06:14.438 14:12:19 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:06:14.438 14:12:19 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:14.438 14:12:19 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68353 00:06:14.438 14:12:19 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:06:15.005 14:12:20 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:06:15.005 14:12:20 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:15.005 14:12:20 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68353 00:06:15.005 14:12:20 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:06:15.005 14:12:20 -- json_config/json_config_extra_key.sh@52 -- # break 00:06:15.005 14:12:20 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:06:15.005 14:12:20 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:06:15.005 SPDK target shutdown done 00:06:15.005 14:12:20 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:06:15.005 Success 00:06:15.005 00:06:15.006 real 0m1.776s 00:06:15.006 user 0m1.547s 00:06:15.006 sys 0m0.575s 00:06:15.006 14:12:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:15.006 14:12:20 -- common/autotest_common.sh@10 -- # set +x 00:06:15.006 ************************************ 00:06:15.006 END TEST json_config_extra_key 00:06:15.006 ************************************ 00:06:15.006 14:12:20 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:15.006 14:12:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:15.006 14:12:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.006 14:12:20 -- common/autotest_common.sh@10 -- # set +x 00:06:15.006 ************************************ 00:06:15.006 START TEST alias_rpc 00:06:15.006 ************************************ 00:06:15.006 14:12:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:15.006 * Looking for test storage... 
00:06:15.006 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:15.006 14:12:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:15.006 14:12:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:15.006 14:12:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:15.006 14:12:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:15.006 14:12:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:15.006 14:12:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:15.006 14:12:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:15.006 14:12:20 -- scripts/common.sh@335 -- # IFS=.-: 00:06:15.006 14:12:20 -- scripts/common.sh@335 -- # read -ra ver1 00:06:15.006 14:12:20 -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.006 14:12:20 -- scripts/common.sh@336 -- # read -ra ver2 00:06:15.006 14:12:20 -- scripts/common.sh@337 -- # local 'op=<' 00:06:15.006 14:12:20 -- scripts/common.sh@339 -- # ver1_l=2 00:06:15.006 14:12:20 -- scripts/common.sh@340 -- # ver2_l=1 00:06:15.006 14:12:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:15.006 14:12:20 -- scripts/common.sh@343 -- # case "$op" in 00:06:15.006 14:12:20 -- scripts/common.sh@344 -- # : 1 00:06:15.006 14:12:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:15.006 14:12:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:15.006 14:12:20 -- scripts/common.sh@364 -- # decimal 1 00:06:15.006 14:12:20 -- scripts/common.sh@352 -- # local d=1 00:06:15.006 14:12:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.006 14:12:20 -- scripts/common.sh@354 -- # echo 1 00:06:15.006 14:12:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:15.006 14:12:20 -- scripts/common.sh@365 -- # decimal 2 00:06:15.006 14:12:20 -- scripts/common.sh@352 -- # local d=2 00:06:15.006 14:12:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.006 14:12:20 -- scripts/common.sh@354 -- # echo 2 00:06:15.006 14:12:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:15.006 14:12:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:15.006 14:12:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:15.006 14:12:20 -- scripts/common.sh@367 -- # return 0 00:06:15.006 14:12:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.006 14:12:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:15.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.006 --rc genhtml_branch_coverage=1 00:06:15.006 --rc genhtml_function_coverage=1 00:06:15.006 --rc genhtml_legend=1 00:06:15.006 --rc geninfo_all_blocks=1 00:06:15.006 --rc geninfo_unexecuted_blocks=1 00:06:15.006 00:06:15.006 ' 00:06:15.006 14:12:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:15.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.006 --rc genhtml_branch_coverage=1 00:06:15.006 --rc genhtml_function_coverage=1 00:06:15.006 --rc genhtml_legend=1 00:06:15.006 --rc geninfo_all_blocks=1 00:06:15.006 --rc geninfo_unexecuted_blocks=1 00:06:15.006 00:06:15.006 ' 00:06:15.006 14:12:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:15.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.006 --rc genhtml_branch_coverage=1 00:06:15.006 --rc genhtml_function_coverage=1 00:06:15.006 --rc genhtml_legend=1 00:06:15.006 --rc geninfo_all_blocks=1 00:06:15.006 --rc geninfo_unexecuted_blocks=1 00:06:15.006 00:06:15.006 ' 
00:06:15.006 14:12:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:15.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.006 --rc genhtml_branch_coverage=1 00:06:15.006 --rc genhtml_function_coverage=1 00:06:15.006 --rc genhtml_legend=1 00:06:15.006 --rc geninfo_all_blocks=1 00:06:15.006 --rc geninfo_unexecuted_blocks=1 00:06:15.006 00:06:15.006 ' 00:06:15.006 14:12:20 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:15.006 14:12:20 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=68442 00:06:15.006 14:12:20 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:15.006 14:12:20 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 68442 00:06:15.006 14:12:20 -- common/autotest_common.sh@829 -- # '[' -z 68442 ']' 00:06:15.006 14:12:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.006 14:12:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.006 14:12:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.006 14:12:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.006 14:12:20 -- common/autotest_common.sh@10 -- # set +x 00:06:15.006 [2024-12-05 14:12:20.651618] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:15.006 [2024-12-05 14:12:20.651721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68442 ] 00:06:15.265 [2024-12-05 14:12:20.790532] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.265 [2024-12-05 14:12:20.846915] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:15.265 [2024-12-05 14:12:20.847109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.200 14:12:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.200 14:12:21 -- common/autotest_common.sh@862 -- # return 0 00:06:16.200 14:12:21 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:16.200 14:12:21 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 68442 00:06:16.200 14:12:21 -- common/autotest_common.sh@936 -- # '[' -z 68442 ']' 00:06:16.200 14:12:21 -- common/autotest_common.sh@940 -- # kill -0 68442 00:06:16.200 14:12:21 -- common/autotest_common.sh@941 -- # uname 00:06:16.200 14:12:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:16.200 14:12:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68442 00:06:16.458 14:12:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:16.458 14:12:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:16.458 killing process with pid 68442 00:06:16.458 14:12:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68442' 00:06:16.458 14:12:21 -- common/autotest_common.sh@955 -- # kill 68442 00:06:16.458 14:12:21 -- common/autotest_common.sh@960 -- # wait 68442 00:06:16.717 00:06:16.717 real 0m1.784s 00:06:16.717 user 0m1.955s 00:06:16.717 sys 0m0.454s 00:06:16.717 14:12:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:16.717 14:12:22 -- common/autotest_common.sh@10 -- # set +x 
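The alias_rpc pass that just completed is equally small: start a bare target, push a configuration through the alias-aware loader, and kill it. Roughly (a sketch; the JSON fed to load_config is not visible in this trace, so the path below is a placeholder, and the reading of -i as the alias-inclusion switch is inferred from the test's purpose):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &        # default RPC socket /var/tmp/spdk.sock
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"
  # -i: presumably include RPC method aliases, which is what this test is about.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i < /path/to/config.json   # placeholder input
  killprocess "$spdk_tgt_pid"
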
00:06:16.717 ************************************ 00:06:16.717 END TEST alias_rpc 00:06:16.717 ************************************ 00:06:16.717 14:12:22 -- spdk/autotest.sh@169 -- # [[ 1 -eq 0 ]] 00:06:16.717 14:12:22 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:16.717 14:12:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:16.717 14:12:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.717 14:12:22 -- common/autotest_common.sh@10 -- # set +x 00:06:16.717 ************************************ 00:06:16.717 START TEST dpdk_mem_utility 00:06:16.717 ************************************ 00:06:16.717 14:12:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:16.717 * Looking for test storage... 00:06:16.717 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:16.717 14:12:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:16.717 14:12:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:16.717 14:12:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:16.976 14:12:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:16.976 14:12:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:16.976 14:12:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:16.976 14:12:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:16.976 14:12:22 -- scripts/common.sh@335 -- # IFS=.-: 00:06:16.976 14:12:22 -- scripts/common.sh@335 -- # read -ra ver1 00:06:16.976 14:12:22 -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.976 14:12:22 -- scripts/common.sh@336 -- # read -ra ver2 00:06:16.976 14:12:22 -- scripts/common.sh@337 -- # local 'op=<' 00:06:16.976 14:12:22 -- scripts/common.sh@339 -- # ver1_l=2 00:06:16.976 14:12:22 -- scripts/common.sh@340 -- # ver2_l=1 00:06:16.976 14:12:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:16.976 14:12:22 -- scripts/common.sh@343 -- # case "$op" in 00:06:16.976 14:12:22 -- scripts/common.sh@344 -- # : 1 00:06:16.976 14:12:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:16.976 14:12:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:16.976 14:12:22 -- scripts/common.sh@364 -- # decimal 1 00:06:16.976 14:12:22 -- scripts/common.sh@352 -- # local d=1 00:06:16.976 14:12:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.976 14:12:22 -- scripts/common.sh@354 -- # echo 1 00:06:16.976 14:12:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:16.976 14:12:22 -- scripts/common.sh@365 -- # decimal 2 00:06:16.976 14:12:22 -- scripts/common.sh@352 -- # local d=2 00:06:16.976 14:12:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.976 14:12:22 -- scripts/common.sh@354 -- # echo 2 00:06:16.976 14:12:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:16.976 14:12:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:16.976 14:12:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:16.976 14:12:22 -- scripts/common.sh@367 -- # return 0 00:06:16.976 14:12:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.976 14:12:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:16.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.976 --rc genhtml_branch_coverage=1 00:06:16.976 --rc genhtml_function_coverage=1 00:06:16.976 --rc genhtml_legend=1 00:06:16.976 --rc geninfo_all_blocks=1 00:06:16.976 --rc geninfo_unexecuted_blocks=1 00:06:16.976 00:06:16.976 ' 00:06:16.976 14:12:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:16.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.976 --rc genhtml_branch_coverage=1 00:06:16.976 --rc genhtml_function_coverage=1 00:06:16.976 --rc genhtml_legend=1 00:06:16.976 --rc geninfo_all_blocks=1 00:06:16.976 --rc geninfo_unexecuted_blocks=1 00:06:16.976 00:06:16.976 ' 00:06:16.976 14:12:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:16.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.976 --rc genhtml_branch_coverage=1 00:06:16.976 --rc genhtml_function_coverage=1 00:06:16.976 --rc genhtml_legend=1 00:06:16.976 --rc geninfo_all_blocks=1 00:06:16.976 --rc geninfo_unexecuted_blocks=1 00:06:16.976 00:06:16.976 ' 00:06:16.976 14:12:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:16.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.976 --rc genhtml_branch_coverage=1 00:06:16.976 --rc genhtml_function_coverage=1 00:06:16.976 --rc genhtml_legend=1 00:06:16.976 --rc geninfo_all_blocks=1 00:06:16.976 --rc geninfo_unexecuted_blocks=1 00:06:16.976 00:06:16.976 ' 00:06:16.976 14:12:22 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:16.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.976 14:12:22 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=68535 00:06:16.976 14:12:22 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 68535 00:06:16.976 14:12:22 -- common/autotest_common.sh@829 -- # '[' -z 68535 ']' 00:06:16.976 14:12:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.976 14:12:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.976 14:12:22 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:16.976 14:12:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
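The memory report printed below is produced in two steps: the running target is asked to write a DPDK memory dump, and dpdk_mem_info.py then renders it. In outline (a sketch; rpc_cmd in the trace is a thin wrapper over rpc.py):

  # Ask the running target to dump its DPDK memory state to a file...
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
  #   -> {"filename": "/tmp/spdk_mem_dump.txt"}

  # ...then post-process that dump: summary first, per-heap detail second.
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py        # heaps / mempools / memzones totals
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0   # element-level listing for heap 0
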
00:06:16.976 14:12:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.976 14:12:22 -- common/autotest_common.sh@10 -- # set +x 00:06:16.976 [2024-12-05 14:12:22.512255] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:16.976 [2024-12-05 14:12:22.512358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68535 ] 00:06:17.235 [2024-12-05 14:12:22.647766] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.235 [2024-12-05 14:12:22.703929] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:17.235 [2024-12-05 14:12:22.704089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.173 14:12:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.173 14:12:23 -- common/autotest_common.sh@862 -- # return 0 00:06:18.173 14:12:23 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:18.173 14:12:23 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:18.173 14:12:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.173 14:12:23 -- common/autotest_common.sh@10 -- # set +x 00:06:18.173 { 00:06:18.173 "filename": "/tmp/spdk_mem_dump.txt" 00:06:18.173 } 00:06:18.173 14:12:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.173 14:12:23 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:18.173 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:18.173 1 heaps totaling size 814.000000 MiB 00:06:18.173 size: 814.000000 MiB heap id: 0 00:06:18.173 end heaps---------- 00:06:18.173 8 mempools totaling size 598.116089 MiB 00:06:18.173 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:18.173 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:18.173 size: 84.521057 MiB name: bdev_io_68535 00:06:18.173 size: 51.011292 MiB name: evtpool_68535 00:06:18.173 size: 50.003479 MiB name: msgpool_68535 00:06:18.173 size: 21.763794 MiB name: PDU_Pool 00:06:18.173 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:18.173 size: 0.026123 MiB name: Session_Pool 00:06:18.173 end mempools------- 00:06:18.173 6 memzones totaling size 4.142822 MiB 00:06:18.173 size: 1.000366 MiB name: RG_ring_0_68535 00:06:18.173 size: 1.000366 MiB name: RG_ring_1_68535 00:06:18.173 size: 1.000366 MiB name: RG_ring_4_68535 00:06:18.173 size: 1.000366 MiB name: RG_ring_5_68535 00:06:18.173 size: 0.125366 MiB name: RG_ring_2_68535 00:06:18.173 size: 0.015991 MiB name: RG_ring_3_68535 00:06:18.173 end memzones------- 00:06:18.173 14:12:23 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:18.173 heap id: 0 total size: 814.000000 MiB number of busy elements: 226 number of free elements: 15 00:06:18.173 list of free elements. 
size: 12.485474 MiB 00:06:18.173 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:18.173 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:18.173 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:18.173 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:18.173 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:18.173 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:18.173 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:18.173 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:18.173 element at address: 0x200000200000 with size: 0.837219 MiB 00:06:18.173 element at address: 0x20001aa00000 with size: 0.571899 MiB 00:06:18.173 element at address: 0x20000b200000 with size: 0.489258 MiB 00:06:18.173 element at address: 0x200000800000 with size: 0.486877 MiB 00:06:18.173 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:18.173 element at address: 0x200027e00000 with size: 0.397949 MiB 00:06:18.173 element at address: 0x200003a00000 with size: 0.351501 MiB 00:06:18.173 list of standard malloc elements. size: 199.251953 MiB 00:06:18.173 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:18.173 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:18.173 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:18.173 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:18.173 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:18.173 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:18.173 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:18.173 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:18.173 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:18.173 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:18.173 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:18.173 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:18.173 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:06:18.173 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:06:18.173 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:06:18.173 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:06:18.173 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:06:18.173 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:06:18.173 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:06:18.173 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:06:18.173 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:06:18.173 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:06:18.173 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:06:18.173 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:06:18.173 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:18.173 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:18.173 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:18.173 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:18.173 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:18.173 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:18.173 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:18.173 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:18.173 element at address: 0x2000002d77c0 with size: 0.000183 MiB 
00:06:18.173 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:18.173 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:18.173 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:18.173 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:18.173 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:18.173 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:18.173 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:18.173 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:06:18.173 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:06:18.173 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:06:18.173 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:06:18.173 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:06:18.173 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:18.173 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:18.173 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:18.173 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:06:18.173 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:06:18.173 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:06:18.173 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:06:18.173 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:06:18.173 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:06:18.173 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:06:18.173 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:06:18.173 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:06:18.173 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:06:18.173 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:06:18.173 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:06:18.173 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:06:18.173 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:06:18.173 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:06:18.173 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:06:18.173 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:06:18.173 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:06:18.173 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:06:18.173 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:06:18.173 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:06:18.173 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:06:18.173 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:18.173 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:18.173 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:18.173 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:18.173 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:18.173 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:18.173 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:18.173 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:18.173 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:06:18.173 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:06:18.173 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:06:18.173 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:06:18.173 element at 
address: 0x20000b27d700 with size: 0.000183 MiB 00:06:18.173 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:06:18.173 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:06:18.173 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:06:18.173 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:18.173 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:18.173 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:18.173 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:18.173 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:18.173 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:18.173 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:18.173 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:06:18.173 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:06:18.173 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:06:18.173 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa94300 
with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:18.174 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e65e00 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e65ec0 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6cac0 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6dd40 with size: 0.000183 MiB 
00:06:18.174 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:18.174 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:18.174 list of memzone associated elements. 
size: 602.262573 MiB 00:06:18.174 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:18.174 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:18.174 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:18.174 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:18.174 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:18.174 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_68535_0 00:06:18.174 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:18.174 associated memzone info: size: 48.002930 MiB name: MP_evtpool_68535_0 00:06:18.174 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:18.174 associated memzone info: size: 48.002930 MiB name: MP_msgpool_68535_0 00:06:18.174 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:18.174 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:18.174 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:18.174 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:18.174 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:18.174 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_68535 00:06:18.174 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:18.174 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_68535 00:06:18.174 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:18.174 associated memzone info: size: 1.007996 MiB name: MP_evtpool_68535 00:06:18.174 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:18.175 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:18.175 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:18.175 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:18.175 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:18.175 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:18.175 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:18.175 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:18.175 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:18.175 associated memzone info: size: 1.000366 MiB name: RG_ring_0_68535 00:06:18.175 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:18.175 associated memzone info: size: 1.000366 MiB name: RG_ring_1_68535 00:06:18.175 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:18.175 associated memzone info: size: 1.000366 MiB name: RG_ring_4_68535 00:06:18.175 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:18.175 associated memzone info: size: 1.000366 MiB name: RG_ring_5_68535 00:06:18.175 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:18.175 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_68535 00:06:18.175 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:18.175 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:18.175 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:18.175 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:18.175 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:18.175 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:18.175 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:18.175 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_68535 00:06:18.175 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:18.175 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:18.175 element at address: 0x200027e65f80 with size: 0.023743 MiB 00:06:18.175 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:18.175 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:18.175 associated memzone info: size: 0.015991 MiB name: RG_ring_3_68535 00:06:18.175 element at address: 0x200027e6c0c0 with size: 0.002441 MiB 00:06:18.175 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:18.175 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:06:18.175 associated memzone info: size: 0.000183 MiB name: MP_msgpool_68535 00:06:18.175 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:18.175 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_68535 00:06:18.175 element at address: 0x200027e6cb80 with size: 0.000305 MiB 00:06:18.175 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:18.175 14:12:23 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:18.175 14:12:23 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 68535 00:06:18.175 14:12:23 -- common/autotest_common.sh@936 -- # '[' -z 68535 ']' 00:06:18.175 14:12:23 -- common/autotest_common.sh@940 -- # kill -0 68535 00:06:18.175 14:12:23 -- common/autotest_common.sh@941 -- # uname 00:06:18.175 14:12:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:18.175 14:12:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68535 00:06:18.175 killing process with pid 68535 00:06:18.175 14:12:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:18.175 14:12:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:18.175 14:12:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68535' 00:06:18.175 14:12:23 -- common/autotest_common.sh@955 -- # kill 68535 00:06:18.175 14:12:23 -- common/autotest_common.sh@960 -- # wait 68535 00:06:18.433 00:06:18.433 real 0m1.747s 00:06:18.433 user 0m1.863s 00:06:18.433 sys 0m0.462s 00:06:18.433 ************************************ 00:06:18.433 END TEST dpdk_mem_utility 00:06:18.433 ************************************ 00:06:18.433 14:12:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:18.433 14:12:24 -- common/autotest_common.sh@10 -- # set +x 00:06:18.433 14:12:24 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:18.433 14:12:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:18.433 14:12:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:18.433 14:12:24 -- common/autotest_common.sh@10 -- # set +x 00:06:18.433 ************************************ 00:06:18.433 START TEST event 00:06:18.433 ************************************ 00:06:18.433 14:12:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:18.691 * Looking for test storage... 
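The teardown traced just above follows the autotest killprocess pattern: confirm the PID is still alive with kill -0, read its command name from ps (the trace also checks whether that name is "sudo", in which case the real helper signals sudo's child instead), send the signal, then wait so the exit status is reaped. A minimal bash sketch of that flow, reconstructed from this trace rather than copied from autotest_common.sh (helper and variable names here are illustrative):

# Sketch: stop a test app by PID the way the trace above does (assumed simplification).
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                 # nothing to kill
    kill -0 "$pid" 2>/dev/null || return 0    # already gone
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    # (the real helper branches here when process_name is "sudo"; not exercised in this run)
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                       # works because the app was started by this shell
}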
00:06:18.691 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:18.691 14:12:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:18.691 14:12:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:18.691 14:12:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:18.691 14:12:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:18.691 14:12:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:18.691 14:12:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:18.691 14:12:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:18.691 14:12:24 -- scripts/common.sh@335 -- # IFS=.-: 00:06:18.691 14:12:24 -- scripts/common.sh@335 -- # read -ra ver1 00:06:18.691 14:12:24 -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.691 14:12:24 -- scripts/common.sh@336 -- # read -ra ver2 00:06:18.691 14:12:24 -- scripts/common.sh@337 -- # local 'op=<' 00:06:18.691 14:12:24 -- scripts/common.sh@339 -- # ver1_l=2 00:06:18.691 14:12:24 -- scripts/common.sh@340 -- # ver2_l=1 00:06:18.691 14:12:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:18.691 14:12:24 -- scripts/common.sh@343 -- # case "$op" in 00:06:18.691 14:12:24 -- scripts/common.sh@344 -- # : 1 00:06:18.691 14:12:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:18.691 14:12:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:18.691 14:12:24 -- scripts/common.sh@364 -- # decimal 1 00:06:18.691 14:12:24 -- scripts/common.sh@352 -- # local d=1 00:06:18.691 14:12:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.691 14:12:24 -- scripts/common.sh@354 -- # echo 1 00:06:18.691 14:12:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:18.691 14:12:24 -- scripts/common.sh@365 -- # decimal 2 00:06:18.691 14:12:24 -- scripts/common.sh@352 -- # local d=2 00:06:18.691 14:12:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.691 14:12:24 -- scripts/common.sh@354 -- # echo 2 00:06:18.691 14:12:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:18.691 14:12:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:18.691 14:12:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:18.691 14:12:24 -- scripts/common.sh@367 -- # return 0 00:06:18.691 14:12:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.691 14:12:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:18.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.691 --rc genhtml_branch_coverage=1 00:06:18.691 --rc genhtml_function_coverage=1 00:06:18.691 --rc genhtml_legend=1 00:06:18.691 --rc geninfo_all_blocks=1 00:06:18.691 --rc geninfo_unexecuted_blocks=1 00:06:18.691 00:06:18.691 ' 00:06:18.691 14:12:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:18.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.691 --rc genhtml_branch_coverage=1 00:06:18.691 --rc genhtml_function_coverage=1 00:06:18.691 --rc genhtml_legend=1 00:06:18.691 --rc geninfo_all_blocks=1 00:06:18.691 --rc geninfo_unexecuted_blocks=1 00:06:18.691 00:06:18.691 ' 00:06:18.691 14:12:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:18.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.691 --rc genhtml_branch_coverage=1 00:06:18.691 --rc genhtml_function_coverage=1 00:06:18.691 --rc genhtml_legend=1 00:06:18.691 --rc geninfo_all_blocks=1 00:06:18.691 --rc geninfo_unexecuted_blocks=1 00:06:18.691 00:06:18.691 ' 00:06:18.691 14:12:24 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:18.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.692 --rc genhtml_branch_coverage=1 00:06:18.692 --rc genhtml_function_coverage=1 00:06:18.692 --rc genhtml_legend=1 00:06:18.692 --rc geninfo_all_blocks=1 00:06:18.692 --rc geninfo_unexecuted_blocks=1 00:06:18.692 00:06:18.692 ' 00:06:18.692 14:12:24 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:18.692 14:12:24 -- bdev/nbd_common.sh@6 -- # set -e 00:06:18.692 14:12:24 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:18.692 14:12:24 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:18.692 14:12:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:18.692 14:12:24 -- common/autotest_common.sh@10 -- # set +x 00:06:18.692 ************************************ 00:06:18.692 START TEST event_perf 00:06:18.692 ************************************ 00:06:18.692 14:12:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:18.692 Running I/O for 1 seconds...[2024-12-05 14:12:24.280977] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:18.692 [2024-12-05 14:12:24.281219] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68632 ] 00:06:18.950 [2024-12-05 14:12:24.419006] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:18.950 [2024-12-05 14:12:24.480532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.950 [2024-12-05 14:12:24.480674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:18.950 [2024-12-05 14:12:24.480796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.950 [2024-12-05 14:12:24.480797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:20.324 Running I/O for 1 seconds... 00:06:20.324 lcore 0: 166191 00:06:20.324 lcore 1: 166189 00:06:20.324 lcore 2: 166190 00:06:20.324 lcore 3: 166190 00:06:20.324 done. 00:06:20.324 00:06:20.324 real 0m1.286s 00:06:20.324 user 0m4.102s 00:06:20.324 sys 0m0.060s 00:06:20.324 14:12:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:20.324 ************************************ 00:06:20.324 END TEST event_perf 00:06:20.324 ************************************ 00:06:20.324 14:12:25 -- common/autotest_common.sh@10 -- # set +x 00:06:20.324 14:12:25 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:20.324 14:12:25 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:20.324 14:12:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.324 14:12:25 -- common/autotest_common.sh@10 -- # set +x 00:06:20.324 ************************************ 00:06:20.324 START TEST event_reactor 00:06:20.324 ************************************ 00:06:20.324 14:12:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:20.324 [2024-12-05 14:12:25.622251] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:20.324 [2024-12-05 14:12:25.622337] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68676 ] 00:06:20.324 [2024-12-05 14:12:25.756361] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.324 [2024-12-05 14:12:25.811203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.259 test_start 00:06:21.259 oneshot 00:06:21.259 tick 100 00:06:21.259 tick 100 00:06:21.259 tick 250 00:06:21.259 tick 100 00:06:21.259 tick 100 00:06:21.259 tick 250 00:06:21.259 tick 100 00:06:21.259 tick 500 00:06:21.259 tick 100 00:06:21.259 tick 100 00:06:21.259 tick 250 00:06:21.259 tick 100 00:06:21.259 tick 100 00:06:21.259 test_end 00:06:21.259 00:06:21.259 real 0m1.259s 00:06:21.259 user 0m1.100s 00:06:21.259 sys 0m0.054s 00:06:21.259 14:12:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:21.259 ************************************ 00:06:21.259 END TEST event_reactor 00:06:21.259 ************************************ 00:06:21.259 14:12:26 -- common/autotest_common.sh@10 -- # set +x 00:06:21.518 14:12:26 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:21.518 14:12:26 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:21.518 14:12:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:21.518 14:12:26 -- common/autotest_common.sh@10 -- # set +x 00:06:21.518 ************************************ 00:06:21.518 START TEST event_reactor_perf 00:06:21.518 ************************************ 00:06:21.518 14:12:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:21.518 [2024-12-05 14:12:26.931603] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:21.518 [2024-12-05 14:12:26.931679] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68706 ] 00:06:21.518 [2024-12-05 14:12:27.060040] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.518 [2024-12-05 14:12:27.112265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.897 test_start 00:06:22.897 test_end 00:06:22.897 Performance: 471937 events per second 00:06:22.897 ************************************ 00:06:22.897 END TEST event_reactor_perf 00:06:22.897 ************************************ 00:06:22.897 00:06:22.897 real 0m1.248s 00:06:22.897 user 0m1.089s 00:06:22.897 sys 0m0.054s 00:06:22.897 14:12:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:22.897 14:12:28 -- common/autotest_common.sh@10 -- # set +x 00:06:22.897 14:12:28 -- event/event.sh@49 -- # uname -s 00:06:22.897 14:12:28 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:22.897 14:12:28 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:22.897 14:12:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:22.897 14:12:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:22.897 14:12:28 -- common/autotest_common.sh@10 -- # set +x 00:06:22.897 ************************************ 00:06:22.897 START TEST event_scheduler 00:06:22.897 ************************************ 00:06:22.897 14:12:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:22.897 * Looking for test storage... 00:06:22.897 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:22.897 14:12:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:22.897 14:12:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:22.897 14:12:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:22.897 14:12:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:22.897 14:12:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:22.897 14:12:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:22.897 14:12:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:22.897 14:12:28 -- scripts/common.sh@335 -- # IFS=.-: 00:06:22.897 14:12:28 -- scripts/common.sh@335 -- # read -ra ver1 00:06:22.897 14:12:28 -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.897 14:12:28 -- scripts/common.sh@336 -- # read -ra ver2 00:06:22.897 14:12:28 -- scripts/common.sh@337 -- # local 'op=<' 00:06:22.897 14:12:28 -- scripts/common.sh@339 -- # ver1_l=2 00:06:22.897 14:12:28 -- scripts/common.sh@340 -- # ver2_l=1 00:06:22.897 14:12:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:22.897 14:12:28 -- scripts/common.sh@343 -- # case "$op" in 00:06:22.897 14:12:28 -- scripts/common.sh@344 -- # : 1 00:06:22.897 14:12:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:22.897 14:12:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:22.897 14:12:28 -- scripts/common.sh@364 -- # decimal 1 00:06:22.897 14:12:28 -- scripts/common.sh@352 -- # local d=1 00:06:22.897 14:12:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.897 14:12:28 -- scripts/common.sh@354 -- # echo 1 00:06:22.897 14:12:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:22.897 14:12:28 -- scripts/common.sh@365 -- # decimal 2 00:06:22.897 14:12:28 -- scripts/common.sh@352 -- # local d=2 00:06:22.897 14:12:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.897 14:12:28 -- scripts/common.sh@354 -- # echo 2 00:06:22.897 14:12:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:22.897 14:12:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:22.897 14:12:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:22.897 14:12:28 -- scripts/common.sh@367 -- # return 0 00:06:22.897 14:12:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.897 14:12:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:22.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.897 --rc genhtml_branch_coverage=1 00:06:22.897 --rc genhtml_function_coverage=1 00:06:22.897 --rc genhtml_legend=1 00:06:22.897 --rc geninfo_all_blocks=1 00:06:22.897 --rc geninfo_unexecuted_blocks=1 00:06:22.897 00:06:22.897 ' 00:06:22.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.897 14:12:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:22.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.897 --rc genhtml_branch_coverage=1 00:06:22.897 --rc genhtml_function_coverage=1 00:06:22.897 --rc genhtml_legend=1 00:06:22.897 --rc geninfo_all_blocks=1 00:06:22.897 --rc geninfo_unexecuted_blocks=1 00:06:22.897 00:06:22.897 ' 00:06:22.897 14:12:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:22.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.897 --rc genhtml_branch_coverage=1 00:06:22.897 --rc genhtml_function_coverage=1 00:06:22.897 --rc genhtml_legend=1 00:06:22.897 --rc geninfo_all_blocks=1 00:06:22.897 --rc geninfo_unexecuted_blocks=1 00:06:22.897 00:06:22.897 ' 00:06:22.897 14:12:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:22.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.897 --rc genhtml_branch_coverage=1 00:06:22.897 --rc genhtml_function_coverage=1 00:06:22.897 --rc genhtml_legend=1 00:06:22.897 --rc geninfo_all_blocks=1 00:06:22.897 --rc geninfo_unexecuted_blocks=1 00:06:22.897 00:06:22.897 ' 00:06:22.897 14:12:28 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:22.897 14:12:28 -- scheduler/scheduler.sh@35 -- # scheduler_pid=68769 00:06:22.897 14:12:28 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:22.897 14:12:28 -- scheduler/scheduler.sh@37 -- # waitforlisten 68769 00:06:22.897 14:12:28 -- common/autotest_common.sh@829 -- # '[' -z 68769 ']' 00:06:22.897 14:12:28 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:22.897 14:12:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.897 14:12:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.897 14:12:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
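Before any scheduler RPCs are issued, the trace above launches the scheduler app with --wait-for-rpc and then blocks in waitforlisten until the UNIX domain socket /var/tmp/spdk.sock is usable. The exact probe autotest_common.sh performs is not shown in this excerpt; a minimal stand-in that waits for the socket with a bounded retry loop might look like this (retry count and pacing are assumptions, not taken from the log):

# Sketch: block until a just-started SPDK app has created its RPC socket (assumed simplification).
waitforlisten() {
    local pid=$1
    local sock=${2:-/var/tmp/spdk.sock}
    echo "Waiting for process to start up and listen on UNIX domain socket ${sock}..."
    local i
    for (( i = 0; i < 100; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1   # app died before creating the socket
        [ -S "$sock" ] && return 0               # socket exists; the real helper also probes it over RPC
        sleep 0.1
    done
    return 1                                     # timed out
}

# usage mirroring the trace, where scheduler_pid was captured from the backgrounded app:
# waitforlisten "$scheduler_pid"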
00:06:22.897 14:12:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.897 14:12:28 -- common/autotest_common.sh@10 -- # set +x 00:06:22.897 [2024-12-05 14:12:28.473413] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:22.897 [2024-12-05 14:12:28.473698] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68769 ] 00:06:23.156 [2024-12-05 14:12:28.605136] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:23.156 [2024-12-05 14:12:28.675610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.156 [2024-12-05 14:12:28.675755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.156 [2024-12-05 14:12:28.675880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:23.156 [2024-12-05 14:12:28.675879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.156 14:12:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.156 14:12:28 -- common/autotest_common.sh@862 -- # return 0 00:06:23.156 14:12:28 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:23.156 14:12:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.156 14:12:28 -- common/autotest_common.sh@10 -- # set +x 00:06:23.156 POWER: Env isn't set yet! 00:06:23.156 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:23.156 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:23.156 POWER: Cannot set governor of lcore 0 to userspace 00:06:23.156 POWER: Attempting to initialise PSTAT power management... 00:06:23.156 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:23.156 POWER: Cannot set governor of lcore 0 to performance 00:06:23.156 POWER: Attempting to initialise AMD PSTATE power management... 00:06:23.156 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:23.156 POWER: Cannot set governor of lcore 0 to userspace 00:06:23.156 POWER: Attempting to initialise CPPC power management... 00:06:23.156 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:23.156 POWER: Cannot set governor of lcore 0 to userspace 00:06:23.157 POWER: Attempting to initialise VM power management... 
00:06:23.157 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:23.157 POWER: Unable to set Power Management Environment for lcore 0 00:06:23.157 [2024-12-05 14:12:28.731346] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:06:23.157 [2024-12-05 14:12:28.731359] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:06:23.157 [2024-12-05 14:12:28.731368] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:06:23.157 [2024-12-05 14:12:28.731379] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:23.157 [2024-12-05 14:12:28.731387] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:23.157 [2024-12-05 14:12:28.731394] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:23.157 14:12:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.157 14:12:28 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:23.157 14:12:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.157 14:12:28 -- common/autotest_common.sh@10 -- # set +x 00:06:23.415 [2024-12-05 14:12:28.818666] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:23.415 14:12:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.415 14:12:28 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:23.415 14:12:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:23.415 14:12:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:23.415 14:12:28 -- common/autotest_common.sh@10 -- # set +x 00:06:23.416 ************************************ 00:06:23.416 START TEST scheduler_create_thread 00:06:23.416 ************************************ 00:06:23.416 14:12:28 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:06:23.416 14:12:28 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:23.416 14:12:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.416 14:12:28 -- common/autotest_common.sh@10 -- # set +x 00:06:23.416 2 00:06:23.416 14:12:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.416 14:12:28 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:23.416 14:12:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.416 14:12:28 -- common/autotest_common.sh@10 -- # set +x 00:06:23.416 3 00:06:23.416 14:12:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.416 14:12:28 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:23.416 14:12:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.416 14:12:28 -- common/autotest_common.sh@10 -- # set +x 00:06:23.416 4 00:06:23.416 14:12:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.416 14:12:28 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:23.416 14:12:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.416 14:12:28 -- common/autotest_common.sh@10 -- # set +x 00:06:23.416 5 00:06:23.416 14:12:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.416 14:12:28 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:23.416 14:12:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.416 14:12:28 -- common/autotest_common.sh@10 -- # set +x 00:06:23.416 6 00:06:23.416 14:12:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.416 14:12:28 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:23.416 14:12:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.416 14:12:28 -- common/autotest_common.sh@10 -- # set +x 00:06:23.416 7 00:06:23.416 14:12:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.416 14:12:28 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:23.416 14:12:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.416 14:12:28 -- common/autotest_common.sh@10 -- # set +x 00:06:23.416 8 00:06:23.416 14:12:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.416 14:12:28 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:23.416 14:12:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.416 14:12:28 -- common/autotest_common.sh@10 -- # set +x 00:06:23.416 9 00:06:23.416 14:12:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.416 14:12:28 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:23.416 14:12:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.416 14:12:28 -- common/autotest_common.sh@10 -- # set +x 00:06:23.416 10 00:06:23.416 14:12:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.416 14:12:28 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:23.416 14:12:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.416 14:12:28 -- common/autotest_common.sh@10 -- # set +x 00:06:23.416 14:12:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.416 14:12:28 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:23.416 14:12:28 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:23.416 14:12:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.416 14:12:28 -- common/autotest_common.sh@10 -- # set +x 00:06:23.416 14:12:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.416 14:12:28 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:23.416 14:12:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.416 14:12:28 -- common/autotest_common.sh@10 -- # set +x 00:06:24.794 14:12:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.794 14:12:30 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:24.794 14:12:30 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:24.794 14:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.794 14:12:30 -- common/autotest_common.sh@10 -- # set +x 00:06:26.168 ************************************ 00:06:26.168 END TEST scheduler_create_thread 00:06:26.168 ************************************ 00:06:26.168 14:12:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.168 00:06:26.168 real 0m2.613s 00:06:26.168 user 0m0.018s 00:06:26.168 sys 0m0.004s 00:06:26.168 14:12:31 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:06:26.168 14:12:31 -- common/autotest_common.sh@10 -- # set +x 00:06:26.168 14:12:31 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:26.168 14:12:31 -- scheduler/scheduler.sh@46 -- # killprocess 68769 00:06:26.168 14:12:31 -- common/autotest_common.sh@936 -- # '[' -z 68769 ']' 00:06:26.168 14:12:31 -- common/autotest_common.sh@940 -- # kill -0 68769 00:06:26.168 14:12:31 -- common/autotest_common.sh@941 -- # uname 00:06:26.168 14:12:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:26.168 14:12:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68769 00:06:26.168 killing process with pid 68769 00:06:26.168 14:12:31 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:26.168 14:12:31 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:26.168 14:12:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68769' 00:06:26.168 14:12:31 -- common/autotest_common.sh@955 -- # kill 68769 00:06:26.168 14:12:31 -- common/autotest_common.sh@960 -- # wait 68769 00:06:26.426 [2024-12-05 14:12:31.924974] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:26.683 ************************************ 00:06:26.683 END TEST event_scheduler 00:06:26.683 ************************************ 00:06:26.683 00:06:26.683 real 0m3.897s 00:06:26.683 user 0m5.698s 00:06:26.683 sys 0m0.358s 00:06:26.683 14:12:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:26.683 14:12:32 -- common/autotest_common.sh@10 -- # set +x 00:06:26.683 14:12:32 -- event/event.sh@51 -- # modprobe -n nbd 00:06:26.683 14:12:32 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:26.684 14:12:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:26.684 14:12:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.684 14:12:32 -- common/autotest_common.sh@10 -- # set +x 00:06:26.684 ************************************ 00:06:26.684 START TEST app_repeat 00:06:26.684 ************************************ 00:06:26.684 14:12:32 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:06:26.684 14:12:32 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.684 14:12:32 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.684 14:12:32 -- event/event.sh@13 -- # local nbd_list 00:06:26.684 14:12:32 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.684 14:12:32 -- event/event.sh@14 -- # local bdev_list 00:06:26.684 14:12:32 -- event/event.sh@15 -- # local repeat_times=4 00:06:26.684 14:12:32 -- event/event.sh@17 -- # modprobe nbd 00:06:26.684 14:12:32 -- event/event.sh@19 -- # repeat_pid=68873 00:06:26.684 14:12:32 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:26.684 Process app_repeat pid: 68873 00:06:26.684 14:12:32 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 68873' 00:06:26.684 14:12:32 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:26.684 spdk_app_start Round 0 00:06:26.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
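The app_repeat test that starts here drives three rounds against the app listening on /var/tmp/spdk-nbd.sock: each round waits for the listener, creates two 64 MB malloc bdevs with 4096-byte blocks, runs the NBD read/write verification, asks the app to terminate with spdk_kill_instance SIGTERM, and sleeps before the next round, as the Round 0/1/2 traces below repeat. A compressed sketch of that outer loop, with paths and RPC names taken from this log and the per-round body reduced to the helpers named in the trace (the rpc_py wrapper is illustrative, not the verbatim event.sh):

# Sketch: outer structure of the app_repeat test as reconstructed from this trace.
rpc_server=/var/tmp/spdk-nbd.sock
rpc_py() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" "$@"; }

for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$repeat_pid" "$rpc_server"        # repeat_pid: app_repeat started earlier with -m 0x3 -t 4
    rpc_py bdev_malloc_create 64 4096                 # -> Malloc0
    rpc_py bdev_malloc_create 64 4096                 # -> Malloc1
    nbd_rpc_data_verify "$rpc_server" 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
    rpc_py spdk_kill_instance SIGTERM                 # app_repeat then brings the framework back up for the next round
    sleep 3
done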
00:06:26.684 14:12:32 -- event/event.sh@23 -- # for i in {0..2} 00:06:26.684 14:12:32 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:26.684 14:12:32 -- event/event.sh@25 -- # waitforlisten 68873 /var/tmp/spdk-nbd.sock 00:06:26.684 14:12:32 -- common/autotest_common.sh@829 -- # '[' -z 68873 ']' 00:06:26.684 14:12:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:26.684 14:12:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:26.684 14:12:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:26.684 14:12:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:26.684 14:12:32 -- common/autotest_common.sh@10 -- # set +x 00:06:26.684 [2024-12-05 14:12:32.208474] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:26.684 [2024-12-05 14:12:32.208744] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68873 ] 00:06:26.942 [2024-12-05 14:12:32.339161] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:26.942 [2024-12-05 14:12:32.409352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.942 [2024-12-05 14:12:32.409367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.878 14:12:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.878 14:12:33 -- common/autotest_common.sh@862 -- # return 0 00:06:27.878 14:12:33 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:27.878 Malloc0 00:06:27.878 14:12:33 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:28.138 Malloc1 00:06:28.138 14:12:33 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:28.138 14:12:33 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.138 14:12:33 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.138 14:12:33 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:28.138 14:12:33 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.138 14:12:33 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:28.138 14:12:33 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:28.138 14:12:33 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.138 14:12:33 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.138 14:12:33 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:28.138 14:12:33 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.138 14:12:33 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:28.138 14:12:33 -- bdev/nbd_common.sh@12 -- # local i 00:06:28.138 14:12:33 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:28.138 14:12:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.138 14:12:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:28.396 /dev/nbd0 00:06:28.397 14:12:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:28.397 14:12:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
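waitfornbd, traced next, is how the test confirms a just-exported NBD device is actually usable: it polls /proc/partitions until the device name appears, then retries a small direct-I/O read until dd succeeds and returns a non-empty file. A condensed bash sketch of that polling logic, using the retry counts and scratch-file path visible in the trace (the sleep pacing is an assumption; the real helper lives in autotest_common.sh):

# Sketch: wait until /dev/$nbd_name is registered and answers reads (assumed simplification).
waitfornbd() {
    local nbd_name=$1 i
    for (( i = 1; i <= 20; i++ )); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    for (( i = 1; i <= 20; i++ )); do
        # a 4 KiB direct read proves the block device answers I/O, not just that the node exists
        if dd if=/dev/"$nbd_name" of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest \
              bs=4096 count=1 iflag=direct; then
            break
        fi
        sleep 0.1
    done
    local size
    size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest)
    rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
    [ "$size" != 0 ]                     # a non-empty read means the device is live
}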
00:06:28.397 14:12:33 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:28.397 14:12:33 -- common/autotest_common.sh@867 -- # local i 00:06:28.397 14:12:33 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:28.397 14:12:33 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:28.397 14:12:33 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:28.397 14:12:33 -- common/autotest_common.sh@871 -- # break 00:06:28.397 14:12:33 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:28.397 14:12:33 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:28.397 14:12:33 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:28.397 1+0 records in 00:06:28.397 1+0 records out 00:06:28.397 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236698 s, 17.3 MB/s 00:06:28.397 14:12:33 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.397 14:12:33 -- common/autotest_common.sh@884 -- # size=4096 00:06:28.397 14:12:33 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.397 14:12:33 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:28.397 14:12:33 -- common/autotest_common.sh@887 -- # return 0 00:06:28.397 14:12:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.397 14:12:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.397 14:12:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:28.656 /dev/nbd1 00:06:28.656 14:12:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:28.656 14:12:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:28.656 14:12:34 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:28.656 14:12:34 -- common/autotest_common.sh@867 -- # local i 00:06:28.656 14:12:34 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:28.656 14:12:34 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:28.656 14:12:34 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:28.656 14:12:34 -- common/autotest_common.sh@871 -- # break 00:06:28.656 14:12:34 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:28.656 14:12:34 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:28.656 14:12:34 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:28.656 1+0 records in 00:06:28.656 1+0 records out 00:06:28.656 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355048 s, 11.5 MB/s 00:06:28.656 14:12:34 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.656 14:12:34 -- common/autotest_common.sh@884 -- # size=4096 00:06:28.656 14:12:34 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.656 14:12:34 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:28.656 14:12:34 -- common/autotest_common.sh@887 -- # return 0 00:06:28.656 14:12:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.656 14:12:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.656 14:12:34 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:28.656 14:12:34 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.656 14:12:34 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:06:28.916 14:12:34 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:28.916 { 00:06:28.916 "bdev_name": "Malloc0", 00:06:28.916 "nbd_device": "/dev/nbd0" 00:06:28.916 }, 00:06:28.916 { 00:06:28.916 "bdev_name": "Malloc1", 00:06:28.916 "nbd_device": "/dev/nbd1" 00:06:28.916 } 00:06:28.916 ]' 00:06:28.916 14:12:34 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:28.916 14:12:34 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:28.916 { 00:06:28.916 "bdev_name": "Malloc0", 00:06:28.916 "nbd_device": "/dev/nbd0" 00:06:28.916 }, 00:06:28.916 { 00:06:28.916 "bdev_name": "Malloc1", 00:06:28.916 "nbd_device": "/dev/nbd1" 00:06:28.916 } 00:06:28.916 ]' 00:06:28.916 14:12:34 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:28.916 /dev/nbd1' 00:06:28.916 14:12:34 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:28.916 /dev/nbd1' 00:06:28.916 14:12:34 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.916 14:12:34 -- bdev/nbd_common.sh@65 -- # count=2 00:06:28.916 14:12:34 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:28.916 14:12:34 -- bdev/nbd_common.sh@95 -- # count=2 00:06:28.916 14:12:34 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:28.916 14:12:34 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:28.916 14:12:34 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.916 14:12:34 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.916 14:12:34 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:28.916 14:12:34 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:28.916 14:12:34 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:28.916 14:12:34 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:28.916 256+0 records in 00:06:28.916 256+0 records out 00:06:28.916 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00778589 s, 135 MB/s 00:06:28.916 14:12:34 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.916 14:12:34 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:28.916 256+0 records in 00:06:28.916 256+0 records out 00:06:28.916 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235408 s, 44.5 MB/s 00:06:28.916 14:12:34 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.916 14:12:34 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:28.916 256+0 records in 00:06:28.916 256+0 records out 00:06:28.916 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0266833 s, 39.3 MB/s 00:06:28.916 14:12:34 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:28.916 14:12:34 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.916 14:12:34 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.916 14:12:34 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:28.916 14:12:34 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:28.916 14:12:34 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:28.916 14:12:34 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:28.916 14:12:34 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.916 14:12:34 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:28.916 
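The data-verify pass recorded around this point is a plain dd/cmp round trip: fill a 1 MiB scratch file from /dev/urandom, write it onto each exported NBD device with direct I/O, then compare the first 1 MiB of each device back against the scratch file and delete it. Reconstructed as a standalone snippet using the sizes and paths from this log (a sketch of what nbd_dd_data_verify does here, not its verbatim source):

# Sketch: write random data to each NBD device and read-verify it (assumed simplification).
tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
nbd_list=('/dev/nbd0' '/dev/nbd1')

dd if=/dev/urandom of="$tmp_file" bs=4096 count=256              # 1 MiB of random data
for nbd in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct    # write pass
done
for nbd in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$nbd"                               # verify pass; cmp exits non-zero on any mismatch
done
rm "$tmp_file"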
14:12:34 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.916 14:12:34 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:28.916 14:12:34 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:28.916 14:12:34 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:28.916 14:12:34 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.916 14:12:34 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.916 14:12:34 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:28.916 14:12:34 -- bdev/nbd_common.sh@51 -- # local i 00:06:28.916 14:12:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.916 14:12:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:29.484 14:12:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:29.484 14:12:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:29.484 14:12:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:29.484 14:12:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.484 14:12:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.484 14:12:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:29.484 14:12:34 -- bdev/nbd_common.sh@41 -- # break 00:06:29.484 14:12:34 -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.484 14:12:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.484 14:12:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:29.742 14:12:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:29.742 14:12:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:29.742 14:12:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:29.742 14:12:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.742 14:12:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.742 14:12:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:29.742 14:12:35 -- bdev/nbd_common.sh@41 -- # break 00:06:29.742 14:12:35 -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.742 14:12:35 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:29.742 14:12:35 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.742 14:12:35 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:29.742 14:12:35 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:29.742 14:12:35 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:29.742 14:12:35 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:30.000 14:12:35 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:30.000 14:12:35 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:30.000 14:12:35 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.000 14:12:35 -- bdev/nbd_common.sh@65 -- # true 00:06:30.000 14:12:35 -- bdev/nbd_common.sh@65 -- # count=0 00:06:30.000 14:12:35 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:30.000 14:12:35 -- bdev/nbd_common.sh@104 -- # count=0 00:06:30.000 14:12:35 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:30.000 14:12:35 -- bdev/nbd_common.sh@109 -- # return 0 00:06:30.000 14:12:35 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:30.259 14:12:35 -- event/event.sh@35 -- # 
sleep 3 00:06:30.518 [2024-12-05 14:12:35.991858] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:30.518 [2024-12-05 14:12:36.044048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.518 [2024-12-05 14:12:36.044087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.518 [2024-12-05 14:12:36.118562] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:30.518 [2024-12-05 14:12:36.118638] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:33.819 14:12:38 -- event/event.sh@23 -- # for i in {0..2} 00:06:33.819 spdk_app_start Round 1 00:06:33.819 14:12:38 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:33.819 14:12:38 -- event/event.sh@25 -- # waitforlisten 68873 /var/tmp/spdk-nbd.sock 00:06:33.819 14:12:38 -- common/autotest_common.sh@829 -- # '[' -z 68873 ']' 00:06:33.819 14:12:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:33.819 14:12:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:33.819 14:12:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:33.819 14:12:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.819 14:12:38 -- common/autotest_common.sh@10 -- # set +x 00:06:33.819 14:12:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.819 14:12:38 -- common/autotest_common.sh@862 -- # return 0 00:06:33.819 14:12:38 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:33.819 Malloc0 00:06:33.819 14:12:39 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:34.113 Malloc1 00:06:34.113 14:12:39 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:34.113 14:12:39 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.113 14:12:39 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:34.113 14:12:39 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:34.113 14:12:39 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.113 14:12:39 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:34.113 14:12:39 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:34.113 14:12:39 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.113 14:12:39 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:34.113 14:12:39 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:34.113 14:12:39 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.113 14:12:39 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:34.113 14:12:39 -- bdev/nbd_common.sh@12 -- # local i 00:06:34.113 14:12:39 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:34.113 14:12:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.113 14:12:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:34.113 /dev/nbd0 00:06:34.113 14:12:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:34.113 14:12:39 -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:34.113 14:12:39 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:34.113 14:12:39 -- common/autotest_common.sh@867 -- # local i 00:06:34.113 14:12:39 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:34.113 14:12:39 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:34.113 14:12:39 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:34.114 14:12:39 -- common/autotest_common.sh@871 -- # break 00:06:34.114 14:12:39 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:34.114 14:12:39 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:34.114 14:12:39 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:34.114 1+0 records in 00:06:34.114 1+0 records out 00:06:34.114 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234747 s, 17.4 MB/s 00:06:34.114 14:12:39 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:34.114 14:12:39 -- common/autotest_common.sh@884 -- # size=4096 00:06:34.114 14:12:39 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:34.114 14:12:39 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:34.114 14:12:39 -- common/autotest_common.sh@887 -- # return 0 00:06:34.114 14:12:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:34.114 14:12:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.114 14:12:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:34.392 /dev/nbd1 00:06:34.392 14:12:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:34.392 14:12:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:34.392 14:12:39 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:34.392 14:12:39 -- common/autotest_common.sh@867 -- # local i 00:06:34.392 14:12:39 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:34.392 14:12:39 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:34.392 14:12:39 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:34.392 14:12:39 -- common/autotest_common.sh@871 -- # break 00:06:34.392 14:12:39 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:34.392 14:12:39 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:34.392 14:12:39 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:34.392 1+0 records in 00:06:34.392 1+0 records out 00:06:34.392 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369699 s, 11.1 MB/s 00:06:34.392 14:12:39 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:34.392 14:12:39 -- common/autotest_common.sh@884 -- # size=4096 00:06:34.392 14:12:39 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:34.392 14:12:39 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:34.392 14:12:39 -- common/autotest_common.sh@887 -- # return 0 00:06:34.392 14:12:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:34.392 14:12:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.392 14:12:39 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:34.392 14:12:39 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.392 14:12:39 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:34.650 14:12:40 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:34.650 { 00:06:34.650 "bdev_name": "Malloc0", 00:06:34.650 "nbd_device": "/dev/nbd0" 00:06:34.650 }, 00:06:34.650 { 00:06:34.650 "bdev_name": "Malloc1", 00:06:34.650 "nbd_device": "/dev/nbd1" 00:06:34.650 } 00:06:34.650 ]' 00:06:34.650 14:12:40 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:34.650 14:12:40 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:34.650 { 00:06:34.650 "bdev_name": "Malloc0", 00:06:34.650 "nbd_device": "/dev/nbd0" 00:06:34.650 }, 00:06:34.650 { 00:06:34.650 "bdev_name": "Malloc1", 00:06:34.650 "nbd_device": "/dev/nbd1" 00:06:34.650 } 00:06:34.650 ]' 00:06:34.911 14:12:40 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:34.911 /dev/nbd1' 00:06:34.911 14:12:40 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:34.911 /dev/nbd1' 00:06:34.911 14:12:40 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:34.911 14:12:40 -- bdev/nbd_common.sh@65 -- # count=2 00:06:34.911 14:12:40 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:34.912 14:12:40 -- bdev/nbd_common.sh@95 -- # count=2 00:06:34.912 14:12:40 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:34.912 14:12:40 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:34.912 14:12:40 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.912 14:12:40 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:34.912 14:12:40 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:34.912 14:12:40 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:34.912 14:12:40 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:34.912 14:12:40 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:34.912 256+0 records in 00:06:34.912 256+0 records out 00:06:34.912 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00620943 s, 169 MB/s 00:06:34.912 14:12:40 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:34.912 14:12:40 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:34.912 256+0 records in 00:06:34.912 256+0 records out 00:06:34.912 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0358523 s, 29.2 MB/s 00:06:34.912 14:12:40 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:34.912 14:12:40 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:34.912 256+0 records in 00:06:34.912 256+0 records out 00:06:34.912 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252433 s, 41.5 MB/s 00:06:34.912 14:12:40 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:34.912 14:12:40 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.912 14:12:40 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:34.912 14:12:40 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:34.912 14:12:40 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:34.912 14:12:40 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:34.912 14:12:40 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:34.912 14:12:40 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:34.912 14:12:40 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:34.912 14:12:40 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:34.912 14:12:40 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:34.912 14:12:40 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:34.912 14:12:40 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:34.912 14:12:40 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.912 14:12:40 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.912 14:12:40 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:34.912 14:12:40 -- bdev/nbd_common.sh@51 -- # local i 00:06:34.912 14:12:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.912 14:12:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:35.170 14:12:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:35.170 14:12:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:35.170 14:12:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:35.170 14:12:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.170 14:12:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.170 14:12:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:35.170 14:12:40 -- bdev/nbd_common.sh@41 -- # break 00:06:35.170 14:12:40 -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.170 14:12:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.170 14:12:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:35.429 14:12:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:35.429 14:12:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:35.429 14:12:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:35.429 14:12:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.429 14:12:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.429 14:12:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:35.429 14:12:40 -- bdev/nbd_common.sh@41 -- # break 00:06:35.429 14:12:40 -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.429 14:12:40 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:35.429 14:12:40 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.429 14:12:40 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:35.687 14:12:41 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:35.687 14:12:41 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:35.687 14:12:41 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:35.687 14:12:41 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:35.687 14:12:41 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:35.687 14:12:41 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:35.687 14:12:41 -- bdev/nbd_common.sh@65 -- # true 00:06:35.687 14:12:41 -- bdev/nbd_common.sh@65 -- # count=0 00:06:35.687 14:12:41 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:35.687 14:12:41 -- bdev/nbd_common.sh@104 -- # count=0 00:06:35.687 14:12:41 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:35.687 14:12:41 -- bdev/nbd_common.sh@109 -- # return 0 00:06:35.687 14:12:41 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
spdk_kill_instance SIGTERM 00:06:35.946 14:12:41 -- event/event.sh@35 -- # sleep 3 00:06:36.205 [2024-12-05 14:12:41.759773] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:36.205 [2024-12-05 14:12:41.811481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.205 [2024-12-05 14:12:41.811497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.463 [2024-12-05 14:12:41.888160] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:36.463 [2024-12-05 14:12:41.888216] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:38.995 14:12:44 -- event/event.sh@23 -- # for i in {0..2} 00:06:38.995 spdk_app_start Round 2 00:06:38.995 14:12:44 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:38.995 14:12:44 -- event/event.sh@25 -- # waitforlisten 68873 /var/tmp/spdk-nbd.sock 00:06:38.995 14:12:44 -- common/autotest_common.sh@829 -- # '[' -z 68873 ']' 00:06:38.995 14:12:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:38.995 14:12:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:38.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:38.995 14:12:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:38.995 14:12:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:38.995 14:12:44 -- common/autotest_common.sh@10 -- # set +x 00:06:39.254 14:12:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.254 14:12:44 -- common/autotest_common.sh@862 -- # return 0 00:06:39.254 14:12:44 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:39.513 Malloc0 00:06:39.513 14:12:44 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:39.772 Malloc1 00:06:39.772 14:12:45 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:39.772 14:12:45 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.772 14:12:45 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:39.772 14:12:45 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:39.772 14:12:45 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.772 14:12:45 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:39.772 14:12:45 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:39.772 14:12:45 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.772 14:12:45 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:39.772 14:12:45 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:39.772 14:12:45 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.772 14:12:45 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:39.772 14:12:45 -- bdev/nbd_common.sh@12 -- # local i 00:06:39.772 14:12:45 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:39.772 14:12:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.772 14:12:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:40.030 /dev/nbd0 00:06:40.030 14:12:45 -- 
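Round 2 of the app_repeat test repeats the same setup the earlier rounds used: two 64 MB malloc bdevs with a 4096-byte block size are created over the spdk-nbd RPC socket and each one is exported as an nbd device. A condensed sketch of that setup, using only the rpc.py commands visible in the trace (variable names are illustrative):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-nbd.sock

# Two 64 MB malloc bdevs with a 4096-byte block size; the RPC prints the new bdev name.
malloc0=$("$RPC" -s "$SOCK" bdev_malloc_create 64 4096)   # e.g. Malloc0
malloc1=$("$RPC" -s "$SOCK" bdev_malloc_create 64 4096)   # e.g. Malloc1

# Export each bdev through the kernel nbd driver.
"$RPC" -s "$SOCK" nbd_start_disk "$malloc0" /dev/nbd0
"$RPC" -s "$SOCK" nbd_start_disk "$malloc1" /dev/nbd1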
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:40.030 14:12:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:40.030 14:12:45 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:40.030 14:12:45 -- common/autotest_common.sh@867 -- # local i 00:06:40.030 14:12:45 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:40.030 14:12:45 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:40.030 14:12:45 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:40.030 14:12:45 -- common/autotest_common.sh@871 -- # break 00:06:40.030 14:12:45 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:40.030 14:12:45 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:40.030 14:12:45 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:40.030 1+0 records in 00:06:40.030 1+0 records out 00:06:40.030 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197323 s, 20.8 MB/s 00:06:40.030 14:12:45 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:40.030 14:12:45 -- common/autotest_common.sh@884 -- # size=4096 00:06:40.030 14:12:45 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:40.030 14:12:45 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:40.030 14:12:45 -- common/autotest_common.sh@887 -- # return 0 00:06:40.030 14:12:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:40.030 14:12:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:40.030 14:12:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:40.289 /dev/nbd1 00:06:40.289 14:12:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:40.289 14:12:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:40.289 14:12:45 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:40.289 14:12:45 -- common/autotest_common.sh@867 -- # local i 00:06:40.289 14:12:45 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:40.289 14:12:45 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:40.289 14:12:45 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:40.289 14:12:45 -- common/autotest_common.sh@871 -- # break 00:06:40.289 14:12:45 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:40.289 14:12:45 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:40.289 14:12:45 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:40.289 1+0 records in 00:06:40.289 1+0 records out 00:06:40.289 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031587 s, 13.0 MB/s 00:06:40.289 14:12:45 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:40.289 14:12:45 -- common/autotest_common.sh@884 -- # size=4096 00:06:40.289 14:12:45 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:40.289 14:12:45 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:40.289 14:12:45 -- common/autotest_common.sh@887 -- # return 0 00:06:40.289 14:12:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:40.289 14:12:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:40.289 14:12:45 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:40.289 14:12:45 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.289 
14:12:45 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:40.547 14:12:46 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:40.547 { 00:06:40.547 "bdev_name": "Malloc0", 00:06:40.547 "nbd_device": "/dev/nbd0" 00:06:40.547 }, 00:06:40.547 { 00:06:40.547 "bdev_name": "Malloc1", 00:06:40.547 "nbd_device": "/dev/nbd1" 00:06:40.547 } 00:06:40.547 ]' 00:06:40.547 14:12:46 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:40.547 { 00:06:40.547 "bdev_name": "Malloc0", 00:06:40.547 "nbd_device": "/dev/nbd0" 00:06:40.547 }, 00:06:40.547 { 00:06:40.547 "bdev_name": "Malloc1", 00:06:40.547 "nbd_device": "/dev/nbd1" 00:06:40.547 } 00:06:40.547 ]' 00:06:40.547 14:12:46 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:40.806 14:12:46 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:40.806 /dev/nbd1' 00:06:40.806 14:12:46 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.806 14:12:46 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:40.806 /dev/nbd1' 00:06:40.806 14:12:46 -- bdev/nbd_common.sh@65 -- # count=2 00:06:40.806 14:12:46 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:40.806 14:12:46 -- bdev/nbd_common.sh@95 -- # count=2 00:06:40.806 14:12:46 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:40.806 14:12:46 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:40.806 14:12:46 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.806 14:12:46 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:40.806 14:12:46 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:40.806 14:12:46 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:40.806 14:12:46 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:40.806 14:12:46 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:40.806 256+0 records in 00:06:40.806 256+0 records out 00:06:40.806 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00526113 s, 199 MB/s 00:06:40.806 14:12:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:40.806 14:12:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:40.806 256+0 records in 00:06:40.806 256+0 records out 00:06:40.806 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257574 s, 40.7 MB/s 00:06:40.806 14:12:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:40.806 14:12:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:40.806 256+0 records in 00:06:40.806 256+0 records out 00:06:40.806 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0272486 s, 38.5 MB/s 00:06:40.806 14:12:46 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:40.806 14:12:46 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.806 14:12:46 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:40.806 14:12:46 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:40.806 14:12:46 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:40.806 14:12:46 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:40.806 14:12:46 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:40.806 14:12:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:40.806 14:12:46 -- 
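Before writing any data, the test asks the target which nbd devices it is exporting: nbd_get_disks returns a JSON array of bdev_name/nbd_device pairs, jq extracts the device paths, and grep -c counts them (2 while the devices are up, 0 after teardown). A small sketch of that counting step, assuming jq is available as in the trace:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-nbd.sock

# Ask the target which nbd devices it is currently exporting.
nbd_disks_json=$("$RPC" -s "$SOCK" nbd_get_disks)

# Keep only the device paths, e.g. "/dev/nbd0" and "/dev/nbd1".
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')

# Count them; grep -c exits non-zero when there are no matches, hence "|| true".
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
echo "$count nbd device(s) exported"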
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:40.806 14:12:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:40.806 14:12:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:40.806 14:12:46 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:40.806 14:12:46 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:40.806 14:12:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.806 14:12:46 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.806 14:12:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:40.806 14:12:46 -- bdev/nbd_common.sh@51 -- # local i 00:06:40.806 14:12:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.806 14:12:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:41.065 14:12:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:41.065 14:12:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:41.065 14:12:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:41.065 14:12:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.065 14:12:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.065 14:12:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:41.065 14:12:46 -- bdev/nbd_common.sh@41 -- # break 00:06:41.065 14:12:46 -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.065 14:12:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.065 14:12:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:41.323 14:12:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:41.323 14:12:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:41.323 14:12:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:41.323 14:12:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.323 14:12:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.323 14:12:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:41.323 14:12:46 -- bdev/nbd_common.sh@41 -- # break 00:06:41.323 14:12:46 -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.323 14:12:46 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.323 14:12:46 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.323 14:12:46 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:41.581 14:12:47 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:41.581 14:12:47 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:41.581 14:12:47 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:41.581 14:12:47 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:41.581 14:12:47 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:41.581 14:12:47 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:41.581 14:12:47 -- bdev/nbd_common.sh@65 -- # true 00:06:41.582 14:12:47 -- bdev/nbd_common.sh@65 -- # count=0 00:06:41.582 14:12:47 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:41.582 14:12:47 -- bdev/nbd_common.sh@104 -- # count=0 00:06:41.582 14:12:47 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:41.582 14:12:47 -- bdev/nbd_common.sh@109 -- # return 0 00:06:41.582 14:12:47 -- event/event.sh@34 -- # 
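The data-verification pass that just completed is a simple round trip: 1 MiB of random data is generated once, written to every exported nbd device with O_DIRECT, read back with cmp, and the temporary file is removed before the devices are torn down. A hedged sketch of the same round trip, using the temporary-file path and device list from the trace:

tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1)

# 256 blocks of 4096 bytes = 1 MiB of random test data.
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256

# Write the pattern to each device, bypassing the page cache.
for nbd in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
done

# Read each device back and compare the first 1 MiB byte for byte.
for nbd in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$nbd"
done

rm "$tmp_file"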
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:41.841 14:12:47 -- event/event.sh@35 -- # sleep 3 00:06:42.101 [2024-12-05 14:12:47.650927] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:42.101 [2024-12-05 14:12:47.701853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.101 [2024-12-05 14:12:47.701863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.360 [2024-12-05 14:12:47.777159] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:42.360 [2024-12-05 14:12:47.777241] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:44.896 14:12:50 -- event/event.sh@38 -- # waitforlisten 68873 /var/tmp/spdk-nbd.sock 00:06:44.896 14:12:50 -- common/autotest_common.sh@829 -- # '[' -z 68873 ']' 00:06:44.896 14:12:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:44.896 14:12:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:44.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:44.896 14:12:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:44.896 14:12:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:44.896 14:12:50 -- common/autotest_common.sh@10 -- # set +x 00:06:45.155 14:12:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.155 14:12:50 -- common/autotest_common.sh@862 -- # return 0 00:06:45.155 14:12:50 -- event/event.sh@39 -- # killprocess 68873 00:06:45.155 14:12:50 -- common/autotest_common.sh@936 -- # '[' -z 68873 ']' 00:06:45.155 14:12:50 -- common/autotest_common.sh@940 -- # kill -0 68873 00:06:45.155 14:12:50 -- common/autotest_common.sh@941 -- # uname 00:06:45.155 14:12:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:45.155 14:12:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68873 00:06:45.155 14:12:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:45.155 14:12:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:45.155 killing process with pid 68873 00:06:45.155 14:12:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68873' 00:06:45.155 14:12:50 -- common/autotest_common.sh@955 -- # kill 68873 00:06:45.155 14:12:50 -- common/autotest_common.sh@960 -- # wait 68873 00:06:45.414 spdk_app_start is called in Round 0. 00:06:45.414 Shutdown signal received, stop current app iteration 00:06:45.414 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:45.414 spdk_app_start is called in Round 1. 00:06:45.414 Shutdown signal received, stop current app iteration 00:06:45.414 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:45.414 spdk_app_start is called in Round 2. 00:06:45.414 Shutdown signal received, stop current app iteration 00:06:45.414 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:45.414 spdk_app_start is called in Round 3. 
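After the final round, the app is shut down through killprocess from autotest_common.sh: confirm the pid is still alive, check its command name so the test never kills something like a sudo wrapper by mistake, then send the signal and wait for the process to be reaped. A trimmed-down sketch of the same idea (the real helper also branches on the host OS):

# Stop a test app: make sure the pid is alive and safe to kill, then SIGTERM and reap it.
killprocess_sketch() {
    local pid=$1 process_name
    kill -0 "$pid" || return 1                       # still running?
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = sudo ] && return 1           # never kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                      # works because the app is our child
}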
00:06:45.414 Shutdown signal received, stop current app iteration 00:06:45.414 14:12:50 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:45.414 14:12:50 -- event/event.sh@42 -- # return 0 00:06:45.414 00:06:45.414 real 0m18.760s 00:06:45.414 user 0m41.942s 00:06:45.414 sys 0m2.925s 00:06:45.414 14:12:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:45.415 14:12:50 -- common/autotest_common.sh@10 -- # set +x 00:06:45.415 ************************************ 00:06:45.415 END TEST app_repeat 00:06:45.415 ************************************ 00:06:45.415 14:12:50 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:45.415 14:12:50 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:45.415 14:12:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:45.415 14:12:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:45.415 14:12:50 -- common/autotest_common.sh@10 -- # set +x 00:06:45.415 ************************************ 00:06:45.415 START TEST cpu_locks 00:06:45.415 ************************************ 00:06:45.415 14:12:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:45.415 * Looking for test storage... 00:06:45.674 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:45.674 14:12:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:45.674 14:12:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:45.674 14:12:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:45.674 14:12:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:45.674 14:12:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:45.674 14:12:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:45.674 14:12:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:45.674 14:12:51 -- scripts/common.sh@335 -- # IFS=.-: 00:06:45.674 14:12:51 -- scripts/common.sh@335 -- # read -ra ver1 00:06:45.674 14:12:51 -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.674 14:12:51 -- scripts/common.sh@336 -- # read -ra ver2 00:06:45.674 14:12:51 -- scripts/common.sh@337 -- # local 'op=<' 00:06:45.674 14:12:51 -- scripts/common.sh@339 -- # ver1_l=2 00:06:45.674 14:12:51 -- scripts/common.sh@340 -- # ver2_l=1 00:06:45.674 14:12:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:45.674 14:12:51 -- scripts/common.sh@343 -- # case "$op" in 00:06:45.674 14:12:51 -- scripts/common.sh@344 -- # : 1 00:06:45.674 14:12:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:45.674 14:12:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:45.674 14:12:51 -- scripts/common.sh@364 -- # decimal 1 00:06:45.674 14:12:51 -- scripts/common.sh@352 -- # local d=1 00:06:45.674 14:12:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.674 14:12:51 -- scripts/common.sh@354 -- # echo 1 00:06:45.674 14:12:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:45.674 14:12:51 -- scripts/common.sh@365 -- # decimal 2 00:06:45.674 14:12:51 -- scripts/common.sh@352 -- # local d=2 00:06:45.674 14:12:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.674 14:12:51 -- scripts/common.sh@354 -- # echo 2 00:06:45.674 14:12:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:45.674 14:12:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:45.674 14:12:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:45.674 14:12:51 -- scripts/common.sh@367 -- # return 0 00:06:45.674 14:12:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.674 14:12:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:45.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.674 --rc genhtml_branch_coverage=1 00:06:45.674 --rc genhtml_function_coverage=1 00:06:45.674 --rc genhtml_legend=1 00:06:45.674 --rc geninfo_all_blocks=1 00:06:45.674 --rc geninfo_unexecuted_blocks=1 00:06:45.674 00:06:45.674 ' 00:06:45.674 14:12:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:45.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.674 --rc genhtml_branch_coverage=1 00:06:45.674 --rc genhtml_function_coverage=1 00:06:45.674 --rc genhtml_legend=1 00:06:45.674 --rc geninfo_all_blocks=1 00:06:45.674 --rc geninfo_unexecuted_blocks=1 00:06:45.674 00:06:45.674 ' 00:06:45.674 14:12:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:45.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.674 --rc genhtml_branch_coverage=1 00:06:45.674 --rc genhtml_function_coverage=1 00:06:45.674 --rc genhtml_legend=1 00:06:45.674 --rc geninfo_all_blocks=1 00:06:45.674 --rc geninfo_unexecuted_blocks=1 00:06:45.674 00:06:45.674 ' 00:06:45.674 14:12:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:45.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.674 --rc genhtml_branch_coverage=1 00:06:45.674 --rc genhtml_function_coverage=1 00:06:45.674 --rc genhtml_legend=1 00:06:45.674 --rc geninfo_all_blocks=1 00:06:45.674 --rc geninfo_unexecuted_blocks=1 00:06:45.674 00:06:45.674 ' 00:06:45.674 14:12:51 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:45.674 14:12:51 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:45.674 14:12:51 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:45.674 14:12:51 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:45.674 14:12:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:45.674 14:12:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:45.674 14:12:51 -- common/autotest_common.sh@10 -- # set +x 00:06:45.674 ************************************ 00:06:45.674 START TEST default_locks 00:06:45.674 ************************************ 00:06:45.674 14:12:51 -- common/autotest_common.sh@1114 -- # default_locks 00:06:45.674 14:12:51 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=69505 00:06:45.674 14:12:51 -- event/cpu_locks.sh@47 -- # waitforlisten 69505 00:06:45.674 14:12:51 -- common/autotest_common.sh@829 -- # '[' -z 69505 ']' 00:06:45.674 14:12:51 
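The lcov check above runs the generic version comparison from scripts/common.sh: both version strings are split on '.', '-' and ':' and compared component by component. A simplified sketch of the same comparison, assuming purely numeric components (the real helper is more tolerant):

# Return 0 if version $1 is strictly older than version $2 (numeric components only).
version_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < len; v++)); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        if ((a < b)); then return 0; fi
        if ((a > b)); then return 1; fi
    done
    return 1   # equal is not "less than"
}

lcov_ver=$(lcov --version | awk '{print $NF}')
version_lt "$lcov_ver" 2 && echo "lcov $lcov_ver is older than 2"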
-- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:45.674 14:12:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.674 14:12:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.674 14:12:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.674 14:12:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.674 14:12:51 -- common/autotest_common.sh@10 -- # set +x 00:06:45.674 [2024-12-05 14:12:51.206551] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:45.674 [2024-12-05 14:12:51.206630] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69505 ] 00:06:45.933 [2024-12-05 14:12:51.330817] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.933 [2024-12-05 14:12:51.411465] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:45.933 [2024-12-05 14:12:51.411637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.501 14:12:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.501 14:12:52 -- common/autotest_common.sh@862 -- # return 0 00:06:46.501 14:12:52 -- event/cpu_locks.sh@49 -- # locks_exist 69505 00:06:46.501 14:12:52 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:46.501 14:12:52 -- event/cpu_locks.sh@22 -- # lslocks -p 69505 00:06:46.760 14:12:52 -- event/cpu_locks.sh@50 -- # killprocess 69505 00:06:46.760 14:12:52 -- common/autotest_common.sh@936 -- # '[' -z 69505 ']' 00:06:46.760 14:12:52 -- common/autotest_common.sh@940 -- # kill -0 69505 00:06:46.760 14:12:52 -- common/autotest_common.sh@941 -- # uname 00:06:46.760 14:12:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:46.760 14:12:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69505 00:06:46.760 14:12:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:46.760 14:12:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:46.760 killing process with pid 69505 00:06:46.760 14:12:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69505' 00:06:46.760 14:12:52 -- common/autotest_common.sh@955 -- # kill 69505 00:06:46.760 14:12:52 -- common/autotest_common.sh@960 -- # wait 69505 00:06:47.329 14:12:52 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 69505 00:06:47.330 14:12:52 -- common/autotest_common.sh@650 -- # local es=0 00:06:47.330 14:12:52 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69505 00:06:47.330 14:12:52 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:47.330 14:12:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.330 14:12:52 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:47.330 14:12:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.330 14:12:52 -- common/autotest_common.sh@653 -- # waitforlisten 69505 00:06:47.330 14:12:52 -- common/autotest_common.sh@829 -- # '[' -z 69505 ']' 00:06:47.330 14:12:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.330 14:12:52 -- 
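default_locks starts spdk_tgt on core 0 (-m 0x1) and then confirms the core lock is really held by listing the POSIX file locks owned by that pid and grepping for the spdk_cpu_lock prefix. The check reduces to a one-liner around lslocks; a sketch, assuming spdk_tgt_pid is the pid reported by waitforlisten:

# Given the pid of a running spdk_tgt, confirm it holds its CPU-core lock file.
# The target takes a POSIX lock on /var/tmp/spdk_cpu_lock_NNN for every core it claims.
locks_exist() {
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}

locks_exist "$spdk_tgt_pid" && echo "pid $spdk_tgt_pid holds its core lock"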
common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.330 14:12:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.330 14:12:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.330 14:12:52 -- common/autotest_common.sh@10 -- # set +x 00:06:47.330 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69505) - No such process 00:06:47.330 ERROR: process (pid: 69505) is no longer running 00:06:47.330 14:12:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:47.330 14:12:52 -- common/autotest_common.sh@862 -- # return 1 00:06:47.330 14:12:52 -- common/autotest_common.sh@653 -- # es=1 00:06:47.330 14:12:52 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:47.330 14:12:52 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:47.330 14:12:52 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:47.330 14:12:52 -- event/cpu_locks.sh@54 -- # no_locks 00:06:47.330 14:12:52 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:47.330 14:12:52 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:47.330 14:12:52 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:47.330 00:06:47.330 real 0m1.731s 00:06:47.330 user 0m1.644s 00:06:47.330 sys 0m0.580s 00:06:47.330 14:12:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:47.330 14:12:52 -- common/autotest_common.sh@10 -- # set +x 00:06:47.330 ************************************ 00:06:47.330 END TEST default_locks 00:06:47.330 ************************************ 00:06:47.330 14:12:52 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:47.330 14:12:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:47.330 14:12:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:47.330 14:12:52 -- common/autotest_common.sh@10 -- # set +x 00:06:47.330 ************************************ 00:06:47.330 START TEST default_locks_via_rpc 00:06:47.330 ************************************ 00:06:47.330 14:12:52 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:06:47.330 14:12:52 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=69563 00:06:47.330 14:12:52 -- event/cpu_locks.sh@63 -- # waitforlisten 69563 00:06:47.330 14:12:52 -- common/autotest_common.sh@829 -- # '[' -z 69563 ']' 00:06:47.330 14:12:52 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:47.330 14:12:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.330 14:12:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.330 14:12:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.330 14:12:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.330 14:12:52 -- common/autotest_common.sh@10 -- # set +x 00:06:47.589 [2024-12-05 14:12:53.002385] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:47.589 [2024-12-05 14:12:53.002491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69563 ] 00:06:47.589 [2024-12-05 14:12:53.136541] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.589 [2024-12-05 14:12:53.207283] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:47.589 [2024-12-05 14:12:53.207432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.528 14:12:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.528 14:12:53 -- common/autotest_common.sh@862 -- # return 0 00:06:48.528 14:12:53 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:48.528 14:12:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.528 14:12:53 -- common/autotest_common.sh@10 -- # set +x 00:06:48.528 14:12:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.528 14:12:53 -- event/cpu_locks.sh@67 -- # no_locks 00:06:48.528 14:12:53 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:48.528 14:12:53 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:48.528 14:12:53 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:48.528 14:12:53 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:48.528 14:12:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.528 14:12:53 -- common/autotest_common.sh@10 -- # set +x 00:06:48.528 14:12:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.528 14:12:53 -- event/cpu_locks.sh@71 -- # locks_exist 69563 00:06:48.528 14:12:53 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:48.528 14:12:53 -- event/cpu_locks.sh@22 -- # lslocks -p 69563 00:06:48.787 14:12:54 -- event/cpu_locks.sh@73 -- # killprocess 69563 00:06:48.787 14:12:54 -- common/autotest_common.sh@936 -- # '[' -z 69563 ']' 00:06:48.787 14:12:54 -- common/autotest_common.sh@940 -- # kill -0 69563 00:06:48.787 14:12:54 -- common/autotest_common.sh@941 -- # uname 00:06:48.787 14:12:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:48.787 14:12:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69563 00:06:49.046 14:12:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:49.046 14:12:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:49.046 killing process with pid 69563 00:06:49.046 14:12:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69563' 00:06:49.046 14:12:54 -- common/autotest_common.sh@955 -- # kill 69563 00:06:49.046 14:12:54 -- common/autotest_common.sh@960 -- # wait 69563 00:06:49.304 00:06:49.304 real 0m1.865s 00:06:49.304 user 0m1.945s 00:06:49.304 sys 0m0.628s 00:06:49.304 14:12:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:49.304 14:12:54 -- common/autotest_common.sh@10 -- # set +x 00:06:49.304 ************************************ 00:06:49.304 END TEST default_locks_via_rpc 00:06:49.304 ************************************ 00:06:49.304 14:12:54 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:49.304 14:12:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:49.304 14:12:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:49.304 14:12:54 -- common/autotest_common.sh@10 -- # set +x 00:06:49.304 
************************************ 00:06:49.304 START TEST non_locking_app_on_locked_coremask 00:06:49.304 ************************************ 00:06:49.304 14:12:54 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:06:49.304 14:12:54 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=69632 00:06:49.304 14:12:54 -- event/cpu_locks.sh@81 -- # waitforlisten 69632 /var/tmp/spdk.sock 00:06:49.304 14:12:54 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:49.304 14:12:54 -- common/autotest_common.sh@829 -- # '[' -z 69632 ']' 00:06:49.304 14:12:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.304 14:12:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:49.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.304 14:12:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.304 14:12:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:49.304 14:12:54 -- common/autotest_common.sh@10 -- # set +x 00:06:49.304 [2024-12-05 14:12:54.924384] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:49.304 [2024-12-05 14:12:54.924490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69632 ] 00:06:49.563 [2024-12-05 14:12:55.062296] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.563 [2024-12-05 14:12:55.116233] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:49.563 [2024-12-05 14:12:55.116387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.501 14:12:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:50.501 14:12:55 -- common/autotest_common.sh@862 -- # return 0 00:06:50.501 14:12:55 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=69660 00:06:50.501 14:12:55 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:50.501 14:12:55 -- event/cpu_locks.sh@85 -- # waitforlisten 69660 /var/tmp/spdk2.sock 00:06:50.501 14:12:55 -- common/autotest_common.sh@829 -- # '[' -z 69660 ']' 00:06:50.501 14:12:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.501 14:12:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:50.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:50.501 14:12:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:50.501 14:12:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:50.501 14:12:55 -- common/autotest_common.sh@10 -- # set +x 00:06:50.501 [2024-12-05 14:12:55.985383] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:50.501 [2024-12-05 14:12:55.985480] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69660 ] 00:06:50.501 [2024-12-05 14:12:56.126954] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:50.501 [2024-12-05 14:12:56.127011] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.760 [2024-12-05 14:12:56.249419] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:50.760 [2024-12-05 14:12:56.249562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.324 14:12:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:51.324 14:12:56 -- common/autotest_common.sh@862 -- # return 0 00:06:51.324 14:12:56 -- event/cpu_locks.sh@87 -- # locks_exist 69632 00:06:51.324 14:12:56 -- event/cpu_locks.sh@22 -- # lslocks -p 69632 00:06:51.324 14:12:56 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:51.889 14:12:57 -- event/cpu_locks.sh@89 -- # killprocess 69632 00:06:51.889 14:12:57 -- common/autotest_common.sh@936 -- # '[' -z 69632 ']' 00:06:51.889 14:12:57 -- common/autotest_common.sh@940 -- # kill -0 69632 00:06:51.889 14:12:57 -- common/autotest_common.sh@941 -- # uname 00:06:51.889 14:12:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:51.889 14:12:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69632 00:06:51.889 14:12:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:51.889 14:12:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:51.889 killing process with pid 69632 00:06:51.889 14:12:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69632' 00:06:51.889 14:12:57 -- common/autotest_common.sh@955 -- # kill 69632 00:06:51.889 14:12:57 -- common/autotest_common.sh@960 -- # wait 69632 00:06:52.455 14:12:58 -- event/cpu_locks.sh@90 -- # killprocess 69660 00:06:52.455 14:12:58 -- common/autotest_common.sh@936 -- # '[' -z 69660 ']' 00:06:52.455 14:12:58 -- common/autotest_common.sh@940 -- # kill -0 69660 00:06:52.455 14:12:58 -- common/autotest_common.sh@941 -- # uname 00:06:52.455 14:12:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:52.455 14:12:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69660 00:06:52.713 14:12:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:52.713 14:12:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:52.713 killing process with pid 69660 00:06:52.713 14:12:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69660' 00:06:52.713 14:12:58 -- common/autotest_common.sh@955 -- # kill 69660 00:06:52.713 14:12:58 -- common/autotest_common.sh@960 -- # wait 69660 00:06:52.972 00:06:52.972 real 0m3.618s 00:06:52.972 user 0m4.075s 00:06:52.972 sys 0m0.934s 00:06:52.972 14:12:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:52.972 14:12:58 -- common/autotest_common.sh@10 -- # set +x 00:06:52.972 ************************************ 00:06:52.972 END TEST non_locking_app_on_locked_coremask 00:06:52.972 ************************************ 00:06:52.972 14:12:58 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:52.972 14:12:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:52.972 14:12:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.972 14:12:58 -- common/autotest_common.sh@10 -- # set +x 00:06:52.972 ************************************ 00:06:52.972 START TEST locking_app_on_unlocked_coremask 00:06:52.972 ************************************ 00:06:52.972 14:12:58 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:06:52.972 14:12:58 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=69734 00:06:52.972 14:12:58 -- event/cpu_locks.sh@99 -- # waitforlisten 69734 /var/tmp/spdk.sock 00:06:52.972 14:12:58 -- common/autotest_common.sh@829 -- # '[' -z 69734 ']' 00:06:52.972 14:12:58 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:52.972 14:12:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.972 14:12:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.972 14:12:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.972 14:12:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.972 14:12:58 -- common/autotest_common.sh@10 -- # set +x 00:06:52.972 [2024-12-05 14:12:58.582228] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:52.972 [2024-12-05 14:12:58.582321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69734 ] 00:06:53.231 [2024-12-05 14:12:58.707244] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:53.231 [2024-12-05 14:12:58.707280] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.231 [2024-12-05 14:12:58.765357] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:53.231 [2024-12-05 14:12:58.765567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.166 14:12:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:54.166 14:12:59 -- common/autotest_common.sh@862 -- # return 0 00:06:54.166 14:12:59 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:54.166 14:12:59 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=69762 00:06:54.166 14:12:59 -- event/cpu_locks.sh@103 -- # waitforlisten 69762 /var/tmp/spdk2.sock 00:06:54.166 14:12:59 -- common/autotest_common.sh@829 -- # '[' -z 69762 ']' 00:06:54.166 14:12:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:54.166 14:12:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:54.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:54.166 14:12:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:54.166 14:12:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:54.166 14:12:59 -- common/autotest_common.sh@10 -- # set +x 00:06:54.166 [2024-12-05 14:12:59.572180] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:54.166 [2024-12-05 14:12:59.572259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69762 ] 00:06:54.166 [2024-12-05 14:12:59.704440] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.426 [2024-12-05 14:12:59.814510] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:54.426 [2024-12-05 14:12:59.814650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.993 14:13:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:54.993 14:13:00 -- common/autotest_common.sh@862 -- # return 0 00:06:54.993 14:13:00 -- event/cpu_locks.sh@105 -- # locks_exist 69762 00:06:54.993 14:13:00 -- event/cpu_locks.sh@22 -- # lslocks -p 69762 00:06:54.993 14:13:00 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:55.926 14:13:01 -- event/cpu_locks.sh@107 -- # killprocess 69734 00:06:55.926 14:13:01 -- common/autotest_common.sh@936 -- # '[' -z 69734 ']' 00:06:55.926 14:13:01 -- common/autotest_common.sh@940 -- # kill -0 69734 00:06:55.926 14:13:01 -- common/autotest_common.sh@941 -- # uname 00:06:55.926 14:13:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:55.926 14:13:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69734 00:06:55.926 14:13:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:55.926 14:13:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:55.926 killing process with pid 69734 00:06:55.926 14:13:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69734' 00:06:55.926 14:13:01 -- common/autotest_common.sh@955 -- # kill 69734 00:06:55.926 14:13:01 -- common/autotest_common.sh@960 -- # wait 69734 00:06:56.492 14:13:02 -- event/cpu_locks.sh@108 -- # killprocess 69762 00:06:56.492 14:13:02 -- common/autotest_common.sh@936 -- # '[' -z 69762 ']' 00:06:56.492 14:13:02 -- common/autotest_common.sh@940 -- # kill -0 69762 00:06:56.492 14:13:02 -- common/autotest_common.sh@941 -- # uname 00:06:56.492 14:13:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:56.492 14:13:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69762 00:06:56.492 14:13:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:56.492 14:13:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:56.492 killing process with pid 69762 00:06:56.492 14:13:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69762' 00:06:56.492 14:13:02 -- common/autotest_common.sh@955 -- # kill 69762 00:06:56.492 14:13:02 -- common/autotest_common.sh@960 -- # wait 69762 00:06:57.060 00:06:57.060 real 0m3.892s 00:06:57.060 user 0m4.343s 00:06:57.060 sys 0m1.077s 00:06:57.060 14:13:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:57.060 14:13:02 -- common/autotest_common.sh@10 -- # set +x 00:06:57.060 ************************************ 00:06:57.060 END TEST locking_app_on_unlocked_coremask 00:06:57.060 ************************************ 00:06:57.060 14:13:02 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:57.060 14:13:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:57.060 14:13:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:57.060 14:13:02 -- common/autotest_common.sh@10 -- # set +x 
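locking_app_on_unlocked_coremask demonstrates the case the lock is designed to allow: because the first target started with --disable-cpumask-locks, a second target can still claim core 0, as long as it talks on its own RPC socket. A condensed sketch of that pairing, with the socket name taken from the trace (starting spdk_tgt like this assumes hugepages and the usual SPDK environment are already set up):

SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

# First instance opts out of CPU-core locking on core 0.
"$SPDK_BIN" -m 0x1 --disable-cpumask-locks &
pid1=$!

# A second instance may then claim core 0 itself; it only needs its own RPC
# socket so the two targets do not collide on the default /var/tmp/spdk.sock.
"$SPDK_BIN" -m 0x1 -r /var/tmp/spdk2.sock &
pid2=$!

# ... exercise both targets, then tear them down ...
kill "$pid1" "$pid2"
wait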
00:06:57.060 ************************************ 00:06:57.060 START TEST locking_app_on_locked_coremask 00:06:57.060 ************************************ 00:06:57.060 14:13:02 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:06:57.060 14:13:02 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=69841 00:06:57.060 14:13:02 -- event/cpu_locks.sh@116 -- # waitforlisten 69841 /var/tmp/spdk.sock 00:06:57.060 14:13:02 -- common/autotest_common.sh@829 -- # '[' -z 69841 ']' 00:06:57.060 14:13:02 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:57.060 14:13:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.060 14:13:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:57.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.060 14:13:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.060 14:13:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:57.060 14:13:02 -- common/autotest_common.sh@10 -- # set +x 00:06:57.060 [2024-12-05 14:13:02.543759] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:57.060 [2024-12-05 14:13:02.543891] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69841 ] 00:06:57.060 [2024-12-05 14:13:02.685160] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.319 [2024-12-05 14:13:02.754082] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:57.319 [2024-12-05 14:13:02.754299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.256 14:13:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:58.256 14:13:03 -- common/autotest_common.sh@862 -- # return 0 00:06:58.256 14:13:03 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=69869 00:06:58.256 14:13:03 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 69869 /var/tmp/spdk2.sock 00:06:58.256 14:13:03 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:58.256 14:13:03 -- common/autotest_common.sh@650 -- # local es=0 00:06:58.256 14:13:03 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69869 /var/tmp/spdk2.sock 00:06:58.256 14:13:03 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:58.256 14:13:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.256 14:13:03 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:58.256 14:13:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.256 14:13:03 -- common/autotest_common.sh@653 -- # waitforlisten 69869 /var/tmp/spdk2.sock 00:06:58.256 14:13:03 -- common/autotest_common.sh@829 -- # '[' -z 69869 ']' 00:06:58.256 14:13:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:58.256 14:13:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:58.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:58.256 14:13:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:58.256 14:13:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:58.256 14:13:03 -- common/autotest_common.sh@10 -- # set +x 00:06:58.256 [2024-12-05 14:13:03.603107] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:58.256 [2024-12-05 14:13:03.603223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69869 ] 00:06:58.256 [2024-12-05 14:13:03.742341] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 69841 has claimed it. 00:06:58.256 [2024-12-05 14:13:03.742392] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:58.823 ERROR: process (pid: 69869) is no longer running 00:06:58.823 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69869) - No such process 00:06:58.823 14:13:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:58.823 14:13:04 -- common/autotest_common.sh@862 -- # return 1 00:06:58.824 14:13:04 -- common/autotest_common.sh@653 -- # es=1 00:06:58.824 14:13:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:58.824 14:13:04 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:58.824 14:13:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:58.824 14:13:04 -- event/cpu_locks.sh@122 -- # locks_exist 69841 00:06:58.824 14:13:04 -- event/cpu_locks.sh@22 -- # lslocks -p 69841 00:06:58.824 14:13:04 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:59.082 14:13:04 -- event/cpu_locks.sh@124 -- # killprocess 69841 00:06:59.082 14:13:04 -- common/autotest_common.sh@936 -- # '[' -z 69841 ']' 00:06:59.082 14:13:04 -- common/autotest_common.sh@940 -- # kill -0 69841 00:06:59.082 14:13:04 -- common/autotest_common.sh@941 -- # uname 00:06:59.341 14:13:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:59.341 14:13:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69841 00:06:59.341 14:13:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:59.341 14:13:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:59.341 killing process with pid 69841 00:06:59.341 14:13:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69841' 00:06:59.341 14:13:04 -- common/autotest_common.sh@955 -- # kill 69841 00:06:59.341 14:13:04 -- common/autotest_common.sh@960 -- # wait 69841 00:06:59.600 00:06:59.600 real 0m2.629s 00:06:59.600 user 0m3.082s 00:06:59.600 sys 0m0.640s 00:06:59.600 14:13:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:59.600 14:13:05 -- common/autotest_common.sh@10 -- # set +x 00:06:59.600 ************************************ 00:06:59.600 END TEST locking_app_on_locked_coremask 00:06:59.600 ************************************ 00:06:59.600 14:13:05 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:59.600 14:13:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:59.600 14:13:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:59.600 14:13:05 -- common/autotest_common.sh@10 -- # set +x 00:06:59.600 ************************************ 00:06:59.600 START TEST locking_overlapped_coremask 00:06:59.600 ************************************ 00:06:59.600 14:13:05 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:06:59.600 14:13:05 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=69920 00:06:59.600 14:13:05 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:59.600 14:13:05 -- event/cpu_locks.sh@133 -- # waitforlisten 69920 /var/tmp/spdk.sock 00:06:59.600 14:13:05 -- common/autotest_common.sh@829 -- # '[' -z 69920 ']' 00:06:59.600 14:13:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.600 14:13:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:59.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.600 14:13:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.600 14:13:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:59.600 14:13:05 -- common/autotest_common.sh@10 -- # set +x 00:06:59.600 [2024-12-05 14:13:05.223657] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:59.600 [2024-12-05 14:13:05.223762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69920 ] 00:06:59.858 [2024-12-05 14:13:05.354763] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:59.858 [2024-12-05 14:13:05.412593] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:59.858 [2024-12-05 14:13:05.412852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.858 [2024-12-05 14:13:05.413247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:59.858 [2024-12-05 14:13:05.413253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.791 14:13:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.791 14:13:06 -- common/autotest_common.sh@862 -- # return 0 00:07:00.791 14:13:06 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=69950 00:07:00.791 14:13:06 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 69950 /var/tmp/spdk2.sock 00:07:00.791 14:13:06 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:00.791 14:13:06 -- common/autotest_common.sh@650 -- # local es=0 00:07:00.791 14:13:06 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69950 /var/tmp/spdk2.sock 00:07:00.791 14:13:06 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:00.791 14:13:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.791 14:13:06 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:00.791 14:13:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.791 14:13:06 -- common/autotest_common.sh@653 -- # waitforlisten 69950 /var/tmp/spdk2.sock 00:07:00.791 14:13:06 -- common/autotest_common.sh@829 -- # '[' -z 69950 ']' 00:07:00.791 14:13:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:00.791 14:13:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:00.791 14:13:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:00.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:00.791 14:13:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:00.791 14:13:06 -- common/autotest_common.sh@10 -- # set +x 00:07:00.791 [2024-12-05 14:13:06.266707] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:00.791 [2024-12-05 14:13:06.266822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69950 ] 00:07:00.791 [2024-12-05 14:13:06.410965] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 69920 has claimed it. 00:07:00.791 [2024-12-05 14:13:06.411022] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:01.357 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69950) - No such process 00:07:01.357 ERROR: process (pid: 69950) is no longer running 00:07:01.357 14:13:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:01.357 14:13:06 -- common/autotest_common.sh@862 -- # return 1 00:07:01.357 14:13:06 -- common/autotest_common.sh@653 -- # es=1 00:07:01.357 14:13:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:01.357 14:13:06 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:01.357 14:13:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:01.357 14:13:06 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:01.357 14:13:06 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:01.357 14:13:06 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:01.357 14:13:06 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:01.357 14:13:06 -- event/cpu_locks.sh@141 -- # killprocess 69920 00:07:01.357 14:13:06 -- common/autotest_common.sh@936 -- # '[' -z 69920 ']' 00:07:01.357 14:13:06 -- common/autotest_common.sh@940 -- # kill -0 69920 00:07:01.357 14:13:06 -- common/autotest_common.sh@941 -- # uname 00:07:01.357 14:13:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:01.357 14:13:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69920 00:07:01.357 14:13:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:01.357 14:13:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:01.357 14:13:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69920' 00:07:01.357 killing process with pid 69920 00:07:01.357 14:13:06 -- common/autotest_common.sh@955 -- # kill 69920 00:07:01.357 14:13:06 -- common/autotest_common.sh@960 -- # wait 69920 00:07:01.930 00:07:01.930 real 0m2.152s 00:07:01.930 user 0m6.114s 00:07:01.930 sys 0m0.439s 00:07:01.930 14:13:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:01.930 14:13:07 -- common/autotest_common.sh@10 -- # set +x 00:07:01.930 ************************************ 00:07:01.930 END TEST locking_overlapped_coremask 00:07:01.930 ************************************ 00:07:01.930 14:13:07 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:01.930 14:13:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:01.930 14:13:07 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:07:01.930 14:13:07 -- common/autotest_common.sh@10 -- # set +x 00:07:01.930 ************************************ 00:07:01.930 START TEST locking_overlapped_coremask_via_rpc 00:07:01.930 ************************************ 00:07:01.930 14:13:07 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:07:01.930 14:13:07 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=69998 00:07:01.930 14:13:07 -- event/cpu_locks.sh@149 -- # waitforlisten 69998 /var/tmp/spdk.sock 00:07:01.930 14:13:07 -- common/autotest_common.sh@829 -- # '[' -z 69998 ']' 00:07:01.930 14:13:07 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:01.930 14:13:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.930 14:13:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:01.930 14:13:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.930 14:13:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:01.930 14:13:07 -- common/autotest_common.sh@10 -- # set +x 00:07:01.930 [2024-12-05 14:13:07.415143] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:01.930 [2024-12-05 14:13:07.415232] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69998 ] 00:07:01.930 [2024-12-05 14:13:07.546269] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:01.930 [2024-12-05 14:13:07.546306] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:02.211 [2024-12-05 14:13:07.602089] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:02.211 [2024-12-05 14:13:07.602660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.211 [2024-12-05 14:13:07.602797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.211 [2024-12-05 14:13:07.602904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.819 14:13:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:02.819 14:13:08 -- common/autotest_common.sh@862 -- # return 0 00:07:02.819 14:13:08 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=70026 00:07:02.819 14:13:08 -- event/cpu_locks.sh@153 -- # waitforlisten 70026 /var/tmp/spdk2.sock 00:07:02.819 14:13:08 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:02.819 14:13:08 -- common/autotest_common.sh@829 -- # '[' -z 70026 ']' 00:07:02.819 14:13:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:02.819 14:13:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:02.819 14:13:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:02.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:02.819 14:13:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:02.819 14:13:08 -- common/autotest_common.sh@10 -- # set +x 00:07:02.819 [2024-12-05 14:13:08.441366] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:02.819 [2024-12-05 14:13:08.441467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70026 ] 00:07:03.077 [2024-12-05 14:13:08.584099] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:03.077 [2024-12-05 14:13:08.584134] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:03.077 [2024-12-05 14:13:08.717688] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:03.077 [2024-12-05 14:13:08.721960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:03.077 [2024-12-05 14:13:08.722082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.077 [2024-12-05 14:13:08.722083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:04.010 14:13:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:04.010 14:13:09 -- common/autotest_common.sh@862 -- # return 0 00:07:04.010 14:13:09 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:04.010 14:13:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.010 14:13:09 -- common/autotest_common.sh@10 -- # set +x 00:07:04.010 14:13:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.010 14:13:09 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:04.010 14:13:09 -- common/autotest_common.sh@650 -- # local es=0 00:07:04.010 14:13:09 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:04.010 14:13:09 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:04.010 14:13:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.010 14:13:09 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:04.010 14:13:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.010 14:13:09 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:04.010 14:13:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.010 14:13:09 -- common/autotest_common.sh@10 -- # set +x 00:07:04.010 [2024-12-05 14:13:09.426988] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 69998 has claimed it. 
00:07:04.010 2024/12/05 14:13:09 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:07:04.010 request: 00:07:04.010 { 00:07:04.010 "method": "framework_enable_cpumask_locks", 00:07:04.010 "params": {} 00:07:04.010 } 00:07:04.010 Got JSON-RPC error response 00:07:04.010 GoRPCClient: error on JSON-RPC call 00:07:04.010 14:13:09 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:04.010 14:13:09 -- common/autotest_common.sh@653 -- # es=1 00:07:04.010 14:13:09 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:04.010 14:13:09 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:04.010 14:13:09 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:04.010 14:13:09 -- event/cpu_locks.sh@158 -- # waitforlisten 69998 /var/tmp/spdk.sock 00:07:04.010 14:13:09 -- common/autotest_common.sh@829 -- # '[' -z 69998 ']' 00:07:04.010 14:13:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.010 14:13:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:04.010 14:13:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.010 14:13:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:04.010 14:13:09 -- common/autotest_common.sh@10 -- # set +x 00:07:04.267 14:13:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:04.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:04.268 14:13:09 -- common/autotest_common.sh@862 -- # return 0 00:07:04.268 14:13:09 -- event/cpu_locks.sh@159 -- # waitforlisten 70026 /var/tmp/spdk2.sock 00:07:04.268 14:13:09 -- common/autotest_common.sh@829 -- # '[' -z 70026 ']' 00:07:04.268 14:13:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:04.268 14:13:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:04.268 14:13:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
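The via_rpc variant above starts both targets with --disable-cpumask-locks and only turns locking on afterwards over JSON-RPC; the request/response pair logged above is the second target failing to claim core 2 at that point. A sketch of driving the same call by hand, assuming scripts/rpc.py from this checkout exposes the method (the suite's rpc_cmd wrapper ultimately goes through it):

  # First target (cores 0-2, /var/tmp/spdk.sock): succeeds and claims its cores.
  ./scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks

  # Second target (cores 2-4, /var/tmp/spdk2.sock): fails with the error logged
  # above, Code=-32603 "Failed to claim CPU core: 2".
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks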
00:07:04.268 14:13:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:04.268 14:13:09 -- common/autotest_common.sh@10 -- # set +x 00:07:04.525 ************************************ 00:07:04.525 END TEST locking_overlapped_coremask_via_rpc 00:07:04.525 ************************************ 00:07:04.525 14:13:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:04.525 14:13:09 -- common/autotest_common.sh@862 -- # return 0 00:07:04.525 14:13:09 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:04.525 14:13:09 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:04.525 14:13:09 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:04.525 14:13:09 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:04.525 00:07:04.525 real 0m2.579s 00:07:04.525 user 0m1.283s 00:07:04.525 sys 0m0.233s 00:07:04.525 14:13:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:04.525 14:13:09 -- common/autotest_common.sh@10 -- # set +x 00:07:04.525 14:13:09 -- event/cpu_locks.sh@174 -- # cleanup 00:07:04.525 14:13:09 -- event/cpu_locks.sh@15 -- # [[ -z 69998 ]] 00:07:04.525 14:13:09 -- event/cpu_locks.sh@15 -- # killprocess 69998 00:07:04.525 14:13:09 -- common/autotest_common.sh@936 -- # '[' -z 69998 ']' 00:07:04.525 14:13:09 -- common/autotest_common.sh@940 -- # kill -0 69998 00:07:04.525 14:13:09 -- common/autotest_common.sh@941 -- # uname 00:07:04.525 14:13:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:04.525 14:13:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69998 00:07:04.525 killing process with pid 69998 00:07:04.525 14:13:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:04.525 14:13:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:04.525 14:13:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69998' 00:07:04.525 14:13:10 -- common/autotest_common.sh@955 -- # kill 69998 00:07:04.525 14:13:10 -- common/autotest_common.sh@960 -- # wait 69998 00:07:05.090 14:13:10 -- event/cpu_locks.sh@16 -- # [[ -z 70026 ]] 00:07:05.090 14:13:10 -- event/cpu_locks.sh@16 -- # killprocess 70026 00:07:05.090 14:13:10 -- common/autotest_common.sh@936 -- # '[' -z 70026 ']' 00:07:05.090 14:13:10 -- common/autotest_common.sh@940 -- # kill -0 70026 00:07:05.090 14:13:10 -- common/autotest_common.sh@941 -- # uname 00:07:05.090 14:13:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:05.090 14:13:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70026 00:07:05.346 killing process with pid 70026 00:07:05.346 14:13:10 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:07:05.346 14:13:10 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:07:05.346 14:13:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70026' 00:07:05.346 14:13:10 -- common/autotest_common.sh@955 -- # kill 70026 00:07:05.346 14:13:10 -- common/autotest_common.sh@960 -- # wait 70026 00:07:05.603 14:13:11 -- event/cpu_locks.sh@18 -- # rm -f 00:07:05.603 14:13:11 -- event/cpu_locks.sh@1 -- # cleanup 00:07:05.603 14:13:11 -- event/cpu_locks.sh@15 -- # [[ -z 69998 ]] 00:07:05.603 14:13:11 -- event/cpu_locks.sh@15 -- # killprocess 69998 00:07:05.603 14:13:11 -- 
common/autotest_common.sh@936 -- # '[' -z 69998 ']' 00:07:05.603 14:13:11 -- common/autotest_common.sh@940 -- # kill -0 69998 00:07:05.603 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (69998) - No such process 00:07:05.603 Process with pid 69998 is not found 00:07:05.604 14:13:11 -- common/autotest_common.sh@963 -- # echo 'Process with pid 69998 is not found' 00:07:05.604 14:13:11 -- event/cpu_locks.sh@16 -- # [[ -z 70026 ]] 00:07:05.604 14:13:11 -- event/cpu_locks.sh@16 -- # killprocess 70026 00:07:05.604 14:13:11 -- common/autotest_common.sh@936 -- # '[' -z 70026 ']' 00:07:05.604 14:13:11 -- common/autotest_common.sh@940 -- # kill -0 70026 00:07:05.604 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (70026) - No such process 00:07:05.604 Process with pid 70026 is not found 00:07:05.604 14:13:11 -- common/autotest_common.sh@963 -- # echo 'Process with pid 70026 is not found' 00:07:05.604 14:13:11 -- event/cpu_locks.sh@18 -- # rm -f 00:07:05.604 00:07:05.604 real 0m20.100s 00:07:05.604 user 0m36.472s 00:07:05.604 sys 0m5.402s 00:07:05.604 14:13:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:05.604 14:13:11 -- common/autotest_common.sh@10 -- # set +x 00:07:05.604 ************************************ 00:07:05.604 END TEST cpu_locks 00:07:05.604 ************************************ 00:07:05.604 00:07:05.604 real 0m47.076s 00:07:05.604 user 1m30.604s 00:07:05.604 sys 0m9.139s 00:07:05.604 ************************************ 00:07:05.604 END TEST event 00:07:05.604 ************************************ 00:07:05.604 14:13:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:05.604 14:13:11 -- common/autotest_common.sh@10 -- # set +x 00:07:05.604 14:13:11 -- spdk/autotest.sh@175 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:05.604 14:13:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:05.604 14:13:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.604 14:13:11 -- common/autotest_common.sh@10 -- # set +x 00:07:05.604 ************************************ 00:07:05.604 START TEST thread 00:07:05.604 ************************************ 00:07:05.604 14:13:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:05.862 * Looking for test storage... 
00:07:05.862 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:05.862 14:13:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:05.862 14:13:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:05.862 14:13:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:05.862 14:13:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:05.862 14:13:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:05.862 14:13:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:05.862 14:13:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:05.862 14:13:11 -- scripts/common.sh@335 -- # IFS=.-: 00:07:05.862 14:13:11 -- scripts/common.sh@335 -- # read -ra ver1 00:07:05.862 14:13:11 -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.862 14:13:11 -- scripts/common.sh@336 -- # read -ra ver2 00:07:05.862 14:13:11 -- scripts/common.sh@337 -- # local 'op=<' 00:07:05.862 14:13:11 -- scripts/common.sh@339 -- # ver1_l=2 00:07:05.862 14:13:11 -- scripts/common.sh@340 -- # ver2_l=1 00:07:05.862 14:13:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:05.862 14:13:11 -- scripts/common.sh@343 -- # case "$op" in 00:07:05.862 14:13:11 -- scripts/common.sh@344 -- # : 1 00:07:05.862 14:13:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:05.862 14:13:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:05.862 14:13:11 -- scripts/common.sh@364 -- # decimal 1 00:07:05.862 14:13:11 -- scripts/common.sh@352 -- # local d=1 00:07:05.862 14:13:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.862 14:13:11 -- scripts/common.sh@354 -- # echo 1 00:07:05.862 14:13:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:05.862 14:13:11 -- scripts/common.sh@365 -- # decimal 2 00:07:05.862 14:13:11 -- scripts/common.sh@352 -- # local d=2 00:07:05.862 14:13:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.862 14:13:11 -- scripts/common.sh@354 -- # echo 2 00:07:05.862 14:13:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:05.862 14:13:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:05.862 14:13:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:05.862 14:13:11 -- scripts/common.sh@367 -- # return 0 00:07:05.862 14:13:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.862 14:13:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:05.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.862 --rc genhtml_branch_coverage=1 00:07:05.862 --rc genhtml_function_coverage=1 00:07:05.862 --rc genhtml_legend=1 00:07:05.862 --rc geninfo_all_blocks=1 00:07:05.862 --rc geninfo_unexecuted_blocks=1 00:07:05.862 00:07:05.862 ' 00:07:05.862 14:13:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:05.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.862 --rc genhtml_branch_coverage=1 00:07:05.862 --rc genhtml_function_coverage=1 00:07:05.862 --rc genhtml_legend=1 00:07:05.862 --rc geninfo_all_blocks=1 00:07:05.862 --rc geninfo_unexecuted_blocks=1 00:07:05.862 00:07:05.862 ' 00:07:05.862 14:13:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:05.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.862 --rc genhtml_branch_coverage=1 00:07:05.862 --rc genhtml_function_coverage=1 00:07:05.862 --rc genhtml_legend=1 00:07:05.862 --rc geninfo_all_blocks=1 00:07:05.862 --rc geninfo_unexecuted_blocks=1 00:07:05.862 00:07:05.862 ' 00:07:05.862 14:13:11 
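The long xtrace block above is scripts/common.sh deciding whether the installed lcov is older than 2.x: the version string is split on '.', '-' and ':' and compared field by field. A compressed restatement of that logic, as a sketch only (the real helper also validates each field with a regex before comparing, which is skipped here):

  version_lt() {
      # Returns success when version $1 sorts before version $2.
      # Assumes purely numeric, dot/dash/colon separated components.
      local IFS='.-:'
      local -a v1 v2
      read -ra v1 <<< "$1"
      read -ra v2 <<< "$2"
      local i a b
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          a=${v1[i]:-0}
          b=${v2[i]:-0}
          (( 10#$a < 10#$b )) && return 0
          (( 10#$a > 10#$b )) && return 1
      done
      return 1   # versions are equal, so not strictly less-than
  }

  # Same comparison as the "lt 1.15 2" step traced above:
  version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov is older than 2.x"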
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:05.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.862 --rc genhtml_branch_coverage=1 00:07:05.862 --rc genhtml_function_coverage=1 00:07:05.862 --rc genhtml_legend=1 00:07:05.862 --rc geninfo_all_blocks=1 00:07:05.862 --rc geninfo_unexecuted_blocks=1 00:07:05.862 00:07:05.862 ' 00:07:05.862 14:13:11 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:05.862 14:13:11 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:05.862 14:13:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.862 14:13:11 -- common/autotest_common.sh@10 -- # set +x 00:07:05.862 ************************************ 00:07:05.862 START TEST thread_poller_perf 00:07:05.862 ************************************ 00:07:05.862 14:13:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:05.862 [2024-12-05 14:13:11.398183] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:05.862 [2024-12-05 14:13:11.398413] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70185 ] 00:07:06.120 [2024-12-05 14:13:11.529615] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.120 [2024-12-05 14:13:11.603094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.120 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:07.055 [2024-12-05T14:13:12.703Z] ====================================== 00:07:07.055 [2024-12-05T14:13:12.703Z] busy:2211818284 (cyc) 00:07:07.055 [2024-12-05T14:13:12.703Z] total_run_count: 389000 00:07:07.055 [2024-12-05T14:13:12.703Z] tsc_hz: 2200000000 (cyc) 00:07:07.055 [2024-12-05T14:13:12.703Z] ====================================== 00:07:07.055 [2024-12-05T14:13:12.703Z] poller_cost: 5685 (cyc), 2584 (nsec) 00:07:07.055 00:07:07.055 real 0m1.298s 00:07:07.055 user 0m1.121s 00:07:07.055 sys 0m0.069s 00:07:07.055 14:13:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:07.055 ************************************ 00:07:07.055 END TEST thread_poller_perf 00:07:07.055 ************************************ 00:07:07.055 14:13:12 -- common/autotest_common.sh@10 -- # set +x 00:07:07.313 14:13:12 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:07.313 14:13:12 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:07.313 14:13:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:07.313 14:13:12 -- common/autotest_common.sh@10 -- # set +x 00:07:07.313 ************************************ 00:07:07.313 START TEST thread_poller_perf 00:07:07.313 ************************************ 00:07:07.313 14:13:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:07.313 [2024-12-05 14:13:12.751832] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:07.313 [2024-12-05 14:13:12.751938] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70221 ] 00:07:07.313 [2024-12-05 14:13:12.889801] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.571 [2024-12-05 14:13:12.961665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.571 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:08.505 [2024-12-05T14:13:14.153Z] ====================================== 00:07:08.505 [2024-12-05T14:13:14.153Z] busy:2203177886 (cyc) 00:07:08.505 [2024-12-05T14:13:14.153Z] total_run_count: 5386000 00:07:08.505 [2024-12-05T14:13:14.153Z] tsc_hz: 2200000000 (cyc) 00:07:08.505 [2024-12-05T14:13:14.153Z] ====================================== 00:07:08.505 [2024-12-05T14:13:14.153Z] poller_cost: 409 (cyc), 185 (nsec) 00:07:08.505 00:07:08.505 real 0m1.307s 00:07:08.505 user 0m1.133s 00:07:08.505 sys 0m0.066s 00:07:08.505 14:13:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:08.505 ************************************ 00:07:08.505 END TEST thread_poller_perf 00:07:08.505 ************************************ 00:07:08.505 14:13:14 -- common/autotest_common.sh@10 -- # set +x 00:07:08.505 14:13:14 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:08.505 00:07:08.505 real 0m2.900s 00:07:08.505 user 0m2.397s 00:07:08.505 sys 0m0.282s 00:07:08.505 ************************************ 00:07:08.505 END TEST thread 00:07:08.505 ************************************ 00:07:08.505 14:13:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:08.505 14:13:14 -- common/autotest_common.sh@10 -- # set +x 00:07:08.505 14:13:14 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:08.505 14:13:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:08.505 14:13:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:08.505 14:13:14 -- common/autotest_common.sh@10 -- # set +x 00:07:08.506 ************************************ 00:07:08.506 START TEST accel 00:07:08.506 ************************************ 00:07:08.506 14:13:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:08.764 * Looking for test storage... 
00:07:08.764 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:08.764 14:13:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:08.764 14:13:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:08.764 14:13:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:08.764 14:13:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:08.764 14:13:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:08.764 14:13:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:08.764 14:13:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:08.764 14:13:14 -- scripts/common.sh@335 -- # IFS=.-: 00:07:08.764 14:13:14 -- scripts/common.sh@335 -- # read -ra ver1 00:07:08.764 14:13:14 -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.764 14:13:14 -- scripts/common.sh@336 -- # read -ra ver2 00:07:08.764 14:13:14 -- scripts/common.sh@337 -- # local 'op=<' 00:07:08.764 14:13:14 -- scripts/common.sh@339 -- # ver1_l=2 00:07:08.764 14:13:14 -- scripts/common.sh@340 -- # ver2_l=1 00:07:08.764 14:13:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:08.764 14:13:14 -- scripts/common.sh@343 -- # case "$op" in 00:07:08.764 14:13:14 -- scripts/common.sh@344 -- # : 1 00:07:08.764 14:13:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:08.764 14:13:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:08.764 14:13:14 -- scripts/common.sh@364 -- # decimal 1 00:07:08.764 14:13:14 -- scripts/common.sh@352 -- # local d=1 00:07:08.764 14:13:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:08.764 14:13:14 -- scripts/common.sh@354 -- # echo 1 00:07:08.764 14:13:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:08.764 14:13:14 -- scripts/common.sh@365 -- # decimal 2 00:07:08.764 14:13:14 -- scripts/common.sh@352 -- # local d=2 00:07:08.764 14:13:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.764 14:13:14 -- scripts/common.sh@354 -- # echo 2 00:07:08.764 14:13:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:08.764 14:13:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:08.764 14:13:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:08.764 14:13:14 -- scripts/common.sh@367 -- # return 0 00:07:08.764 14:13:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.764 14:13:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:08.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.764 --rc genhtml_branch_coverage=1 00:07:08.764 --rc genhtml_function_coverage=1 00:07:08.764 --rc genhtml_legend=1 00:07:08.764 --rc geninfo_all_blocks=1 00:07:08.764 --rc geninfo_unexecuted_blocks=1 00:07:08.764 00:07:08.764 ' 00:07:08.764 14:13:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:08.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.764 --rc genhtml_branch_coverage=1 00:07:08.764 --rc genhtml_function_coverage=1 00:07:08.764 --rc genhtml_legend=1 00:07:08.764 --rc geninfo_all_blocks=1 00:07:08.764 --rc geninfo_unexecuted_blocks=1 00:07:08.764 00:07:08.764 ' 00:07:08.764 14:13:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:08.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.764 --rc genhtml_branch_coverage=1 00:07:08.764 --rc genhtml_function_coverage=1 00:07:08.764 --rc genhtml_legend=1 00:07:08.764 --rc geninfo_all_blocks=1 00:07:08.764 --rc geninfo_unexecuted_blocks=1 00:07:08.764 00:07:08.764 ' 00:07:08.764 14:13:14 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:08.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.764 --rc genhtml_branch_coverage=1 00:07:08.764 --rc genhtml_function_coverage=1 00:07:08.764 --rc genhtml_legend=1 00:07:08.764 --rc geninfo_all_blocks=1 00:07:08.764 --rc geninfo_unexecuted_blocks=1 00:07:08.764 00:07:08.764 ' 00:07:08.764 14:13:14 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:07:08.764 14:13:14 -- accel/accel.sh@74 -- # get_expected_opcs 00:07:08.764 14:13:14 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:08.764 14:13:14 -- accel/accel.sh@59 -- # spdk_tgt_pid=70297 00:07:08.764 14:13:14 -- accel/accel.sh@60 -- # waitforlisten 70297 00:07:08.764 14:13:14 -- common/autotest_common.sh@829 -- # '[' -z 70297 ']' 00:07:08.764 14:13:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.764 14:13:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:08.765 14:13:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.765 14:13:14 -- accel/accel.sh@58 -- # build_accel_config 00:07:08.765 14:13:14 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:08.765 14:13:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:08.765 14:13:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.765 14:13:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.765 14:13:14 -- common/autotest_common.sh@10 -- # set +x 00:07:08.765 14:13:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.765 14:13:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.765 14:13:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.765 14:13:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.765 14:13:14 -- accel/accel.sh@42 -- # jq -r . 00:07:08.765 [2024-12-05 14:13:14.401546] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:08.765 [2024-12-05 14:13:14.401665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70297 ] 00:07:09.023 [2024-12-05 14:13:14.540829] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.023 [2024-12-05 14:13:14.613762] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:09.023 [2024-12-05 14:13:14.613964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.959 14:13:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.959 14:13:15 -- common/autotest_common.sh@862 -- # return 0 00:07:09.959 14:13:15 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:09.959 14:13:15 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:07:09.959 14:13:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.959 14:13:15 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:09.959 14:13:15 -- common/autotest_common.sh@10 -- # set +x 00:07:09.959 14:13:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.959 14:13:15 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.959 14:13:15 -- accel/accel.sh@64 -- # IFS== 00:07:09.959 14:13:15 -- accel/accel.sh@64 -- # read -r opc module 00:07:09.959 14:13:15 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:09.959 14:13:15 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.959 14:13:15 -- accel/accel.sh@64 -- # IFS== 00:07:09.959 14:13:15 -- accel/accel.sh@64 -- # read -r opc module 00:07:09.959 14:13:15 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:09.959 14:13:15 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.959 14:13:15 -- accel/accel.sh@64 -- # IFS== 00:07:09.959 14:13:15 -- accel/accel.sh@64 -- # read -r opc module 00:07:09.959 14:13:15 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:09.959 14:13:15 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.959 14:13:15 -- accel/accel.sh@64 -- # IFS== 00:07:09.959 14:13:15 -- accel/accel.sh@64 -- # read -r opc module 00:07:09.959 14:13:15 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:09.959 14:13:15 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.959 14:13:15 -- accel/accel.sh@64 -- # IFS== 00:07:09.959 14:13:15 -- accel/accel.sh@64 -- # read -r opc module 00:07:09.959 14:13:15 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:09.959 14:13:15 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.959 14:13:15 -- accel/accel.sh@64 -- # IFS== 00:07:09.959 14:13:15 -- accel/accel.sh@64 -- # read -r opc module 00:07:09.959 14:13:15 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:09.959 14:13:15 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.959 14:13:15 -- accel/accel.sh@64 -- # IFS== 00:07:09.959 14:13:15 -- accel/accel.sh@64 -- # read -r opc module 00:07:09.959 14:13:15 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:09.959 14:13:15 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.959 14:13:15 -- accel/accel.sh@64 -- # IFS== 00:07:09.959 14:13:15 -- accel/accel.sh@64 -- # read -r opc module 00:07:09.959 14:13:15 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:09.959 14:13:15 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.959 14:13:15 -- accel/accel.sh@64 -- # IFS== 00:07:09.959 14:13:15 -- accel/accel.sh@64 -- # read -r opc module 00:07:09.959 14:13:15 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:09.959 14:13:15 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.959 14:13:15 -- accel/accel.sh@64 -- # IFS== 00:07:09.959 14:13:15 -- accel/accel.sh@64 -- # read -r opc module 00:07:09.959 14:13:15 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:09.959 14:13:15 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.959 14:13:15 -- accel/accel.sh@64 -- # IFS== 00:07:09.959 14:13:15 -- accel/accel.sh@64 -- # read -r opc module 00:07:09.959 14:13:15 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:09.959 14:13:15 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.959 14:13:15 -- accel/accel.sh@64 -- # IFS== 00:07:09.959 14:13:15 -- accel/accel.sh@64 -- # read -r opc module 00:07:09.959 14:13:15 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 
00:07:09.959 14:13:15 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.959 14:13:15 -- accel/accel.sh@64 -- # IFS== 00:07:09.959 14:13:15 -- accel/accel.sh@64 -- # read -r opc module 00:07:09.959 14:13:15 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:09.959 14:13:15 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.959 14:13:15 -- accel/accel.sh@64 -- # IFS== 00:07:09.959 14:13:15 -- accel/accel.sh@64 -- # read -r opc module 00:07:09.959 14:13:15 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:09.959 14:13:15 -- accel/accel.sh@67 -- # killprocess 70297 00:07:09.959 14:13:15 -- common/autotest_common.sh@936 -- # '[' -z 70297 ']' 00:07:09.959 14:13:15 -- common/autotest_common.sh@940 -- # kill -0 70297 00:07:09.959 14:13:15 -- common/autotest_common.sh@941 -- # uname 00:07:09.959 14:13:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:09.959 14:13:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70297 00:07:09.959 14:13:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:09.959 14:13:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:09.959 killing process with pid 70297 00:07:09.959 14:13:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70297' 00:07:09.959 14:13:15 -- common/autotest_common.sh@955 -- # kill 70297 00:07:09.959 14:13:15 -- common/autotest_common.sh@960 -- # wait 70297 00:07:10.526 14:13:16 -- accel/accel.sh@68 -- # trap - ERR 00:07:10.526 14:13:16 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:07:10.526 14:13:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:10.526 14:13:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.526 14:13:16 -- common/autotest_common.sh@10 -- # set +x 00:07:10.526 14:13:16 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:07:10.526 14:13:16 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.526 14:13:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:10.526 14:13:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.526 14:13:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.526 14:13:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.526 14:13:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.526 14:13:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.526 14:13:16 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.526 14:13:16 -- accel/accel.sh@42 -- # jq -r . 
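The expected-opcode table being built above comes from a single RPC: accel_get_opc_assignments returns a JSON object mapping each operation to the module that will service it, and the jq filter in the trace flattens that object into key=value lines. Run by hand it would look like this (a sketch; assumes scripts/rpc.py and jq as used by the suite, with the target on the default socket):

  ./scripts/rpc.py -s /var/tmp/spdk.sock accel_get_opc_assignments \
      | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
  # With no accel module configuration loaded, as in this run, every line
  # comes back as "<operation>=software", e.g. copy=software, crc32c=software.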
00:07:10.526 14:13:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:10.526 14:13:16 -- common/autotest_common.sh@10 -- # set +x 00:07:10.526 14:13:16 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:10.526 14:13:16 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:10.526 14:13:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.526 14:13:16 -- common/autotest_common.sh@10 -- # set +x 00:07:10.526 ************************************ 00:07:10.526 START TEST accel_missing_filename 00:07:10.526 ************************************ 00:07:10.526 14:13:16 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:07:10.526 14:13:16 -- common/autotest_common.sh@650 -- # local es=0 00:07:10.526 14:13:16 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:10.526 14:13:16 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:10.526 14:13:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:10.526 14:13:16 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:10.526 14:13:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:10.526 14:13:16 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:07:10.526 14:13:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:10.526 14:13:16 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.526 14:13:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.526 14:13:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.526 14:13:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.526 14:13:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.526 14:13:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.526 14:13:16 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.526 14:13:16 -- accel/accel.sh@42 -- # jq -r . 00:07:10.526 [2024-12-05 14:13:16.130142] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:10.526 [2024-12-05 14:13:16.130238] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70374 ] 00:07:10.785 [2024-12-05 14:13:16.266624] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.785 [2024-12-05 14:13:16.338125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.785 [2024-12-05 14:13:16.414073] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:11.044 [2024-12-05 14:13:16.528150] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:07:11.044 A filename is required. 
00:07:11.044 14:13:16 -- common/autotest_common.sh@653 -- # es=234 00:07:11.044 14:13:16 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:11.044 14:13:16 -- common/autotest_common.sh@662 -- # es=106 00:07:11.044 14:13:16 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:11.044 14:13:16 -- common/autotest_common.sh@670 -- # es=1 00:07:11.044 14:13:16 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:11.044 00:07:11.044 real 0m0.533s 00:07:11.044 user 0m0.324s 00:07:11.044 sys 0m0.155s 00:07:11.044 14:13:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:11.044 ************************************ 00:07:11.044 END TEST accel_missing_filename 00:07:11.044 ************************************ 00:07:11.044 14:13:16 -- common/autotest_common.sh@10 -- # set +x 00:07:11.044 14:13:16 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:11.044 14:13:16 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:11.044 14:13:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:11.044 14:13:16 -- common/autotest_common.sh@10 -- # set +x 00:07:11.303 ************************************ 00:07:11.303 START TEST accel_compress_verify 00:07:11.303 ************************************ 00:07:11.303 14:13:16 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:11.303 14:13:16 -- common/autotest_common.sh@650 -- # local es=0 00:07:11.303 14:13:16 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:11.303 14:13:16 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:11.303 14:13:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.303 14:13:16 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:11.303 14:13:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.303 14:13:16 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:11.303 14:13:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:11.303 14:13:16 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.303 14:13:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.303 14:13:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.303 14:13:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.303 14:13:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.303 14:13:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.303 14:13:16 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.303 14:13:16 -- accel/accel.sh@42 -- # jq -r . 00:07:11.303 [2024-12-05 14:13:16.722001] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:11.303 [2024-12-05 14:13:16.722120] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70404 ] 00:07:11.303 [2024-12-05 14:13:16.862005] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.303 [2024-12-05 14:13:16.930023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.562 [2024-12-05 14:13:16.997949] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:11.562 [2024-12-05 14:13:17.089626] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:07:11.562 00:07:11.562 Compression does not support the verify option, aborting. 00:07:11.562 14:13:17 -- common/autotest_common.sh@653 -- # es=161 00:07:11.562 14:13:17 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:11.562 14:13:17 -- common/autotest_common.sh@662 -- # es=33 00:07:11.562 14:13:17 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:11.562 14:13:17 -- common/autotest_common.sh@670 -- # es=1 00:07:11.562 14:13:17 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:11.562 00:07:11.562 real 0m0.474s 00:07:11.562 user 0m0.295s 00:07:11.562 sys 0m0.128s 00:07:11.562 14:13:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:11.562 ************************************ 00:07:11.562 14:13:17 -- common/autotest_common.sh@10 -- # set +x 00:07:11.562 END TEST accel_compress_verify 00:07:11.562 ************************************ 00:07:11.821 14:13:17 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:11.821 14:13:17 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:11.821 14:13:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:11.821 14:13:17 -- common/autotest_common.sh@10 -- # set +x 00:07:11.821 ************************************ 00:07:11.821 START TEST accel_wrong_workload 00:07:11.821 ************************************ 00:07:11.821 14:13:17 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:07:11.821 14:13:17 -- common/autotest_common.sh@650 -- # local es=0 00:07:11.821 14:13:17 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:11.821 14:13:17 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:11.821 14:13:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.821 14:13:17 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:11.821 14:13:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.821 14:13:17 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:07:11.821 14:13:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:11.821 14:13:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.821 14:13:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.821 14:13:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.821 14:13:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.821 14:13:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.821 14:13:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.821 14:13:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.821 14:13:17 -- accel/accel.sh@42 -- # jq -r . 
00:07:11.821 Unsupported workload type: foobar 00:07:11.821 [2024-12-05 14:13:17.248892] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:11.821 accel_perf options: 00:07:11.821 [-h help message] 00:07:11.821 [-q queue depth per core] 00:07:11.821 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:11.821 [-T number of threads per core 00:07:11.821 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:11.821 [-t time in seconds] 00:07:11.821 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:11.821 [ dif_verify, , dif_generate, dif_generate_copy 00:07:11.821 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:11.821 [-l for compress/decompress workloads, name of uncompressed input file 00:07:11.821 [-S for crc32c workload, use this seed value (default 0) 00:07:11.821 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:11.821 [-f for fill workload, use this BYTE value (default 255) 00:07:11.821 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:11.821 [-y verify result if this switch is on] 00:07:11.821 [-a tasks to allocate per core (default: same value as -q)] 00:07:11.821 Can be used to spread operations across a wider range of memory. 00:07:11.821 14:13:17 -- common/autotest_common.sh@653 -- # es=1 00:07:11.821 14:13:17 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:11.821 14:13:17 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:11.821 14:13:17 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:11.821 00:07:11.821 real 0m0.033s 00:07:11.821 user 0m0.012s 00:07:11.821 sys 0m0.020s 00:07:11.821 14:13:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:11.821 ************************************ 00:07:11.821 END TEST accel_wrong_workload 00:07:11.821 ************************************ 00:07:11.821 14:13:17 -- common/autotest_common.sh@10 -- # set +x 00:07:11.821 14:13:17 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:11.821 14:13:17 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:11.821 14:13:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:11.821 14:13:17 -- common/autotest_common.sh@10 -- # set +x 00:07:11.821 ************************************ 00:07:11.821 START TEST accel_negative_buffers 00:07:11.821 ************************************ 00:07:11.821 14:13:17 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:11.821 14:13:17 -- common/autotest_common.sh@650 -- # local es=0 00:07:11.821 14:13:17 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:11.821 14:13:17 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:11.821 14:13:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.821 14:13:17 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:11.822 14:13:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.822 14:13:17 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:07:11.822 14:13:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:11.822 14:13:17 -- accel/accel.sh@12 -- # 
build_accel_config 00:07:11.822 14:13:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.822 14:13:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.822 14:13:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.822 14:13:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.822 14:13:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.822 14:13:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.822 14:13:17 -- accel/accel.sh@42 -- # jq -r . 00:07:11.822 -x option must be non-negative. 00:07:11.822 [2024-12-05 14:13:17.330103] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:11.822 accel_perf options: 00:07:11.822 [-h help message] 00:07:11.822 [-q queue depth per core] 00:07:11.822 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:11.822 [-T number of threads per core 00:07:11.822 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:11.822 [-t time in seconds] 00:07:11.822 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:11.822 [ dif_verify, , dif_generate, dif_generate_copy 00:07:11.822 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:11.822 [-l for compress/decompress workloads, name of uncompressed input file 00:07:11.822 [-S for crc32c workload, use this seed value (default 0) 00:07:11.822 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:11.822 [-f for fill workload, use this BYTE value (default 255) 00:07:11.822 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:11.822 [-y verify result if this switch is on] 00:07:11.822 [-a tasks to allocate per core (default: same value as -q)] 00:07:11.822 Can be used to spread operations across a wider range of memory. 
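Both negative tests above only exercise accel_perf's argument parsing: an unknown workload (-w foobar) and a negative source-buffer count (-x -1) must be rejected with the usage text just printed. For contrast, the option set the next test (accel_crc32c) passes is a valid run of the same binary; paths are the ones from this trace, and the -c /dev/fd/62 seen in the trace, which pipes in the accel config assembled by build_accel_config, is dropped for a standalone sketch:

  # Rejected, as in the two tests above:
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w foobar         # unsupported workload type
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x -1   # -x must be non-negative

  # Accepted (the crc32c run that follows):
  #   -t 1   run for one second        -w crc32c  workload type
  #   -S 32  CRC-32C seed value        -y         verify each result
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y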
00:07:11.822 14:13:17 -- common/autotest_common.sh@653 -- # es=1 00:07:11.822 14:13:17 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:11.822 14:13:17 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:11.822 14:13:17 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:11.822 00:07:11.822 real 0m0.030s 00:07:11.822 user 0m0.012s 00:07:11.822 sys 0m0.018s 00:07:11.822 14:13:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:11.822 14:13:17 -- common/autotest_common.sh@10 -- # set +x 00:07:11.822 ************************************ 00:07:11.822 END TEST accel_negative_buffers 00:07:11.822 ************************************ 00:07:11.822 14:13:17 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:11.822 14:13:17 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:11.822 14:13:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:11.822 14:13:17 -- common/autotest_common.sh@10 -- # set +x 00:07:11.822 ************************************ 00:07:11.822 START TEST accel_crc32c 00:07:11.822 ************************************ 00:07:11.822 14:13:17 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:11.822 14:13:17 -- accel/accel.sh@16 -- # local accel_opc 00:07:11.822 14:13:17 -- accel/accel.sh@17 -- # local accel_module 00:07:11.822 14:13:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:11.822 14:13:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:11.822 14:13:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.822 14:13:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.822 14:13:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.822 14:13:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.822 14:13:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.822 14:13:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.822 14:13:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.822 14:13:17 -- accel/accel.sh@42 -- # jq -r . 00:07:11.822 [2024-12-05 14:13:17.411359] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:11.822 [2024-12-05 14:13:17.411448] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70457 ] 00:07:12.081 [2024-12-05 14:13:17.550900] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.081 [2024-12-05 14:13:17.630588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.458 14:13:18 -- accel/accel.sh@18 -- # out=' 00:07:13.458 SPDK Configuration: 00:07:13.458 Core mask: 0x1 00:07:13.458 00:07:13.458 Accel Perf Configuration: 00:07:13.458 Workload Type: crc32c 00:07:13.458 CRC-32C seed: 32 00:07:13.458 Transfer size: 4096 bytes 00:07:13.458 Vector count 1 00:07:13.458 Module: software 00:07:13.458 Queue depth: 32 00:07:13.458 Allocate depth: 32 00:07:13.458 # threads/core: 1 00:07:13.458 Run time: 1 seconds 00:07:13.458 Verify: Yes 00:07:13.458 00:07:13.458 Running for 1 seconds... 
00:07:13.458 00:07:13.458 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:13.458 ------------------------------------------------------------------------------------ 00:07:13.458 0,0 558368/s 2181 MiB/s 0 0 00:07:13.458 ==================================================================================== 00:07:13.458 Total 558368/s 2181 MiB/s 0 0' 00:07:13.458 14:13:18 -- accel/accel.sh@20 -- # IFS=: 00:07:13.458 14:13:18 -- accel/accel.sh@20 -- # read -r var val 00:07:13.458 14:13:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:13.458 14:13:18 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.458 14:13:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:13.458 14:13:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.458 14:13:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.458 14:13:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.458 14:13:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.458 14:13:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.458 14:13:18 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.458 14:13:18 -- accel/accel.sh@42 -- # jq -r . 00:07:13.458 [2024-12-05 14:13:18.844735] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:13.458 [2024-12-05 14:13:18.844858] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70482 ] 00:07:13.458 [2024-12-05 14:13:18.981126] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.458 [2024-12-05 14:13:19.032304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.458 14:13:19 -- accel/accel.sh@21 -- # val= 00:07:13.458 14:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.458 14:13:19 -- accel/accel.sh@20 -- # IFS=: 00:07:13.458 14:13:19 -- accel/accel.sh@20 -- # read -r var val 00:07:13.458 14:13:19 -- accel/accel.sh@21 -- # val= 00:07:13.458 14:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.458 14:13:19 -- accel/accel.sh@20 -- # IFS=: 00:07:13.458 14:13:19 -- accel/accel.sh@20 -- # read -r var val 00:07:13.458 14:13:19 -- accel/accel.sh@21 -- # val=0x1 00:07:13.458 14:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.458 14:13:19 -- accel/accel.sh@20 -- # IFS=: 00:07:13.458 14:13:19 -- accel/accel.sh@20 -- # read -r var val 00:07:13.458 14:13:19 -- accel/accel.sh@21 -- # val= 00:07:13.458 14:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.459 14:13:19 -- accel/accel.sh@20 -- # IFS=: 00:07:13.459 14:13:19 -- accel/accel.sh@20 -- # read -r var val 00:07:13.459 14:13:19 -- accel/accel.sh@21 -- # val= 00:07:13.459 14:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.459 14:13:19 -- accel/accel.sh@20 -- # IFS=: 00:07:13.459 14:13:19 -- accel/accel.sh@20 -- # read -r var val 00:07:13.459 14:13:19 -- accel/accel.sh@21 -- # val=crc32c 00:07:13.459 14:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.459 14:13:19 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:13.459 14:13:19 -- accel/accel.sh@20 -- # IFS=: 00:07:13.459 14:13:19 -- accel/accel.sh@20 -- # read -r var val 00:07:13.459 14:13:19 -- accel/accel.sh@21 -- # val=32 00:07:13.459 14:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.459 14:13:19 -- accel/accel.sh@20 -- # IFS=: 00:07:13.459 14:13:19 -- accel/accel.sh@20 -- # read -r var val 00:07:13.459 14:13:19 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:07:13.459 14:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.459 14:13:19 -- accel/accel.sh@20 -- # IFS=: 00:07:13.459 14:13:19 -- accel/accel.sh@20 -- # read -r var val 00:07:13.459 14:13:19 -- accel/accel.sh@21 -- # val= 00:07:13.459 14:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.459 14:13:19 -- accel/accel.sh@20 -- # IFS=: 00:07:13.459 14:13:19 -- accel/accel.sh@20 -- # read -r var val 00:07:13.459 14:13:19 -- accel/accel.sh@21 -- # val=software 00:07:13.459 14:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.459 14:13:19 -- accel/accel.sh@23 -- # accel_module=software 00:07:13.459 14:13:19 -- accel/accel.sh@20 -- # IFS=: 00:07:13.459 14:13:19 -- accel/accel.sh@20 -- # read -r var val 00:07:13.459 14:13:19 -- accel/accel.sh@21 -- # val=32 00:07:13.459 14:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.459 14:13:19 -- accel/accel.sh@20 -- # IFS=: 00:07:13.459 14:13:19 -- accel/accel.sh@20 -- # read -r var val 00:07:13.459 14:13:19 -- accel/accel.sh@21 -- # val=32 00:07:13.459 14:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.459 14:13:19 -- accel/accel.sh@20 -- # IFS=: 00:07:13.459 14:13:19 -- accel/accel.sh@20 -- # read -r var val 00:07:13.459 14:13:19 -- accel/accel.sh@21 -- # val=1 00:07:13.459 14:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.459 14:13:19 -- accel/accel.sh@20 -- # IFS=: 00:07:13.459 14:13:19 -- accel/accel.sh@20 -- # read -r var val 00:07:13.459 14:13:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:13.459 14:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.459 14:13:19 -- accel/accel.sh@20 -- # IFS=: 00:07:13.459 14:13:19 -- accel/accel.sh@20 -- # read -r var val 00:07:13.459 14:13:19 -- accel/accel.sh@21 -- # val=Yes 00:07:13.459 14:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.459 14:13:19 -- accel/accel.sh@20 -- # IFS=: 00:07:13.459 14:13:19 -- accel/accel.sh@20 -- # read -r var val 00:07:13.459 14:13:19 -- accel/accel.sh@21 -- # val= 00:07:13.459 14:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.459 14:13:19 -- accel/accel.sh@20 -- # IFS=: 00:07:13.459 14:13:19 -- accel/accel.sh@20 -- # read -r var val 00:07:13.459 14:13:19 -- accel/accel.sh@21 -- # val= 00:07:13.459 14:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.459 14:13:19 -- accel/accel.sh@20 -- # IFS=: 00:07:13.459 14:13:19 -- accel/accel.sh@20 -- # read -r var val 00:07:14.836 14:13:20 -- accel/accel.sh@21 -- # val= 00:07:14.836 14:13:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.836 14:13:20 -- accel/accel.sh@20 -- # IFS=: 00:07:14.836 14:13:20 -- accel/accel.sh@20 -- # read -r var val 00:07:14.836 14:13:20 -- accel/accel.sh@21 -- # val= 00:07:14.836 14:13:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.836 14:13:20 -- accel/accel.sh@20 -- # IFS=: 00:07:14.836 14:13:20 -- accel/accel.sh@20 -- # read -r var val 00:07:14.836 14:13:20 -- accel/accel.sh@21 -- # val= 00:07:14.836 14:13:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.836 14:13:20 -- accel/accel.sh@20 -- # IFS=: 00:07:14.836 14:13:20 -- accel/accel.sh@20 -- # read -r var val 00:07:14.836 14:13:20 -- accel/accel.sh@21 -- # val= 00:07:14.836 14:13:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.836 14:13:20 -- accel/accel.sh@20 -- # IFS=: 00:07:14.836 14:13:20 -- accel/accel.sh@20 -- # read -r var val 00:07:14.836 14:13:20 -- accel/accel.sh@21 -- # val= 00:07:14.836 14:13:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.836 14:13:20 -- accel/accel.sh@20 -- # IFS=: 00:07:14.836 14:13:20 -- 
accel/accel.sh@20 -- # read -r var val 00:07:14.836 14:13:20 -- accel/accel.sh@21 -- # val= 00:07:14.836 14:13:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.836 14:13:20 -- accel/accel.sh@20 -- # IFS=: 00:07:14.836 14:13:20 -- accel/accel.sh@20 -- # read -r var val 00:07:14.836 14:13:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:14.836 14:13:20 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:14.836 14:13:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.836 00:07:14.836 real 0m2.833s 00:07:14.836 user 0m2.411s 00:07:14.836 sys 0m0.223s 00:07:14.836 14:13:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:14.836 ************************************ 00:07:14.836 END TEST accel_crc32c 00:07:14.836 ************************************ 00:07:14.836 14:13:20 -- common/autotest_common.sh@10 -- # set +x 00:07:14.836 14:13:20 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:14.836 14:13:20 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:14.836 14:13:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:14.836 14:13:20 -- common/autotest_common.sh@10 -- # set +x 00:07:14.836 ************************************ 00:07:14.836 START TEST accel_crc32c_C2 00:07:14.836 ************************************ 00:07:14.836 14:13:20 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:14.836 14:13:20 -- accel/accel.sh@16 -- # local accel_opc 00:07:14.836 14:13:20 -- accel/accel.sh@17 -- # local accel_module 00:07:14.836 14:13:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:14.836 14:13:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:14.836 14:13:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.836 14:13:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.836 14:13:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.836 14:13:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.836 14:13:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.836 14:13:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.836 14:13:20 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.836 14:13:20 -- accel/accel.sh@42 -- # jq -r . 00:07:14.836 [2024-12-05 14:13:20.296879] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:14.836 [2024-12-05 14:13:20.296954] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70511 ] 00:07:14.836 [2024-12-05 14:13:20.425275] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.836 [2024-12-05 14:13:20.476784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.214 14:13:21 -- accel/accel.sh@18 -- # out=' 00:07:16.214 SPDK Configuration: 00:07:16.214 Core mask: 0x1 00:07:16.214 00:07:16.214 Accel Perf Configuration: 00:07:16.214 Workload Type: crc32c 00:07:16.214 CRC-32C seed: 0 00:07:16.214 Transfer size: 4096 bytes 00:07:16.214 Vector count 2 00:07:16.214 Module: software 00:07:16.214 Queue depth: 32 00:07:16.214 Allocate depth: 32 00:07:16.214 # threads/core: 1 00:07:16.214 Run time: 1 seconds 00:07:16.214 Verify: Yes 00:07:16.214 00:07:16.214 Running for 1 seconds... 
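As a quick editorial sanity check (not part of the captured output), the single-vector crc32c result reported above, 558368 transfers/s at 4096 bytes each, matches the printed bandwidth once MiB is read as 2^20 bytes:

    # 558368 transfers/s * 4096 B per transfer, converted to MiB/s
    awk 'BEGIN { printf "%.0f MiB/s\n", 558368 * 4096 / (1024 * 1024) }'   # prints 2181 MiB/s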
00:07:16.214 00:07:16.214 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:16.214 ------------------------------------------------------------------------------------ 00:07:16.214 0,0 430368/s 3362 MiB/s 0 0 00:07:16.214 ==================================================================================== 00:07:16.214 Total 430368/s 1681 MiB/s 0 0' 00:07:16.214 14:13:21 -- accel/accel.sh@20 -- # IFS=: 00:07:16.214 14:13:21 -- accel/accel.sh@20 -- # read -r var val 00:07:16.214 14:13:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:16.214 14:13:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.214 14:13:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:16.214 14:13:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.214 14:13:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.214 14:13:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.214 14:13:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.214 14:13:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.214 14:13:21 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.214 14:13:21 -- accel/accel.sh@42 -- # jq -r . 00:07:16.214 [2024-12-05 14:13:21.709276] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:16.214 [2024-12-05 14:13:21.709382] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70531 ] 00:07:16.214 [2024-12-05 14:13:21.845769] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.473 [2024-12-05 14:13:21.903395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.473 14:13:21 -- accel/accel.sh@21 -- # val= 00:07:16.473 14:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.473 14:13:21 -- accel/accel.sh@20 -- # IFS=: 00:07:16.473 14:13:21 -- accel/accel.sh@20 -- # read -r var val 00:07:16.473 14:13:21 -- accel/accel.sh@21 -- # val= 00:07:16.473 14:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.473 14:13:21 -- accel/accel.sh@20 -- # IFS=: 00:07:16.473 14:13:21 -- accel/accel.sh@20 -- # read -r var val 00:07:16.473 14:13:21 -- accel/accel.sh@21 -- # val=0x1 00:07:16.473 14:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.473 14:13:21 -- accel/accel.sh@20 -- # IFS=: 00:07:16.473 14:13:21 -- accel/accel.sh@20 -- # read -r var val 00:07:16.473 14:13:21 -- accel/accel.sh@21 -- # val= 00:07:16.473 14:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.473 14:13:21 -- accel/accel.sh@20 -- # IFS=: 00:07:16.473 14:13:21 -- accel/accel.sh@20 -- # read -r var val 00:07:16.473 14:13:21 -- accel/accel.sh@21 -- # val= 00:07:16.473 14:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.473 14:13:21 -- accel/accel.sh@20 -- # IFS=: 00:07:16.473 14:13:21 -- accel/accel.sh@20 -- # read -r var val 00:07:16.473 14:13:21 -- accel/accel.sh@21 -- # val=crc32c 00:07:16.473 14:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.473 14:13:21 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:16.473 14:13:21 -- accel/accel.sh@20 -- # IFS=: 00:07:16.473 14:13:21 -- accel/accel.sh@20 -- # read -r var val 00:07:16.473 14:13:21 -- accel/accel.sh@21 -- # val=0 00:07:16.473 14:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.473 14:13:21 -- accel/accel.sh@20 -- # IFS=: 00:07:16.473 14:13:21 -- accel/accel.sh@20 -- # read -r var val 00:07:16.473 14:13:21 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:07:16.473 14:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.473 14:13:21 -- accel/accel.sh@20 -- # IFS=: 00:07:16.473 14:13:21 -- accel/accel.sh@20 -- # read -r var val 00:07:16.473 14:13:21 -- accel/accel.sh@21 -- # val= 00:07:16.473 14:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.473 14:13:21 -- accel/accel.sh@20 -- # IFS=: 00:07:16.473 14:13:21 -- accel/accel.sh@20 -- # read -r var val 00:07:16.473 14:13:21 -- accel/accel.sh@21 -- # val=software 00:07:16.473 14:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.473 14:13:21 -- accel/accel.sh@23 -- # accel_module=software 00:07:16.473 14:13:21 -- accel/accel.sh@20 -- # IFS=: 00:07:16.473 14:13:21 -- accel/accel.sh@20 -- # read -r var val 00:07:16.473 14:13:21 -- accel/accel.sh@21 -- # val=32 00:07:16.473 14:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.473 14:13:21 -- accel/accel.sh@20 -- # IFS=: 00:07:16.473 14:13:21 -- accel/accel.sh@20 -- # read -r var val 00:07:16.473 14:13:21 -- accel/accel.sh@21 -- # val=32 00:07:16.473 14:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.473 14:13:21 -- accel/accel.sh@20 -- # IFS=: 00:07:16.473 14:13:21 -- accel/accel.sh@20 -- # read -r var val 00:07:16.473 14:13:21 -- accel/accel.sh@21 -- # val=1 00:07:16.473 14:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.473 14:13:21 -- accel/accel.sh@20 -- # IFS=: 00:07:16.473 14:13:21 -- accel/accel.sh@20 -- # read -r var val 00:07:16.473 14:13:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:16.473 14:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.473 14:13:21 -- accel/accel.sh@20 -- # IFS=: 00:07:16.473 14:13:21 -- accel/accel.sh@20 -- # read -r var val 00:07:16.473 14:13:21 -- accel/accel.sh@21 -- # val=Yes 00:07:16.473 14:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.473 14:13:21 -- accel/accel.sh@20 -- # IFS=: 00:07:16.473 14:13:21 -- accel/accel.sh@20 -- # read -r var val 00:07:16.473 14:13:21 -- accel/accel.sh@21 -- # val= 00:07:16.473 14:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.473 14:13:21 -- accel/accel.sh@20 -- # IFS=: 00:07:16.473 14:13:21 -- accel/accel.sh@20 -- # read -r var val 00:07:16.473 14:13:21 -- accel/accel.sh@21 -- # val= 00:07:16.473 14:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.473 14:13:21 -- accel/accel.sh@20 -- # IFS=: 00:07:16.473 14:13:21 -- accel/accel.sh@20 -- # read -r var val 00:07:17.851 14:13:23 -- accel/accel.sh@21 -- # val= 00:07:17.851 14:13:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.851 14:13:23 -- accel/accel.sh@20 -- # IFS=: 00:07:17.851 14:13:23 -- accel/accel.sh@20 -- # read -r var val 00:07:17.851 14:13:23 -- accel/accel.sh@21 -- # val= 00:07:17.851 14:13:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.851 14:13:23 -- accel/accel.sh@20 -- # IFS=: 00:07:17.851 14:13:23 -- accel/accel.sh@20 -- # read -r var val 00:07:17.851 14:13:23 -- accel/accel.sh@21 -- # val= 00:07:17.851 14:13:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.851 14:13:23 -- accel/accel.sh@20 -- # IFS=: 00:07:17.851 14:13:23 -- accel/accel.sh@20 -- # read -r var val 00:07:17.851 14:13:23 -- accel/accel.sh@21 -- # val= 00:07:17.851 14:13:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.851 14:13:23 -- accel/accel.sh@20 -- # IFS=: 00:07:17.851 14:13:23 -- accel/accel.sh@20 -- # read -r var val 00:07:17.851 14:13:23 -- accel/accel.sh@21 -- # val= 00:07:17.851 14:13:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.851 14:13:23 -- accel/accel.sh@20 -- # IFS=: 00:07:17.851 14:13:23 -- 
accel/accel.sh@20 -- # read -r var val 00:07:17.851 14:13:23 -- accel/accel.sh@21 -- # val= 00:07:17.851 14:13:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.851 14:13:23 -- accel/accel.sh@20 -- # IFS=: 00:07:17.851 14:13:23 -- accel/accel.sh@20 -- # read -r var val 00:07:17.851 14:13:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:17.851 14:13:23 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:17.851 14:13:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.851 00:07:17.851 real 0m2.818s 00:07:17.851 user 0m2.398s 00:07:17.851 sys 0m0.218s 00:07:17.851 ************************************ 00:07:17.851 END TEST accel_crc32c_C2 00:07:17.851 ************************************ 00:07:17.851 14:13:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:17.851 14:13:23 -- common/autotest_common.sh@10 -- # set +x 00:07:17.851 14:13:23 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:17.851 14:13:23 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:17.851 14:13:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.851 14:13:23 -- common/autotest_common.sh@10 -- # set +x 00:07:17.851 ************************************ 00:07:17.851 START TEST accel_copy 00:07:17.851 ************************************ 00:07:17.851 14:13:23 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:07:17.851 14:13:23 -- accel/accel.sh@16 -- # local accel_opc 00:07:17.851 14:13:23 -- accel/accel.sh@17 -- # local accel_module 00:07:17.851 14:13:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:07:17.851 14:13:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:17.851 14:13:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.851 14:13:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.851 14:13:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.851 14:13:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.851 14:13:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.851 14:13:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.851 14:13:23 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.851 14:13:23 -- accel/accel.sh@42 -- # jq -r . 00:07:17.851 [2024-12-05 14:13:23.166707] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:17.851 [2024-12-05 14:13:23.166777] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70565 ] 00:07:17.851 [2024-12-05 14:13:23.294731] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.851 [2024-12-05 14:13:23.350443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.229 14:13:24 -- accel/accel.sh@18 -- # out=' 00:07:19.229 SPDK Configuration: 00:07:19.229 Core mask: 0x1 00:07:19.229 00:07:19.229 Accel Perf Configuration: 00:07:19.229 Workload Type: copy 00:07:19.229 Transfer size: 4096 bytes 00:07:19.229 Vector count 1 00:07:19.229 Module: software 00:07:19.229 Queue depth: 32 00:07:19.229 Allocate depth: 32 00:07:19.229 # threads/core: 1 00:07:19.229 Run time: 1 seconds 00:07:19.229 Verify: Yes 00:07:19.229 00:07:19.229 Running for 1 seconds... 
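The START TEST/END TEST banners and the real/user/sys timings that bracket each of these runs come from the run_test helper in autotest_common.sh (visible in the xtrace lines above). A simplified, hypothetical sketch of just the behavior seen in this log follows; the real helper also handles the xtrace toggling and exit-status bookkeeping that appear in the trace:

    # hypothetical sketch of the banner-and-timing pattern; not the actual autotest_common.sh code
    run_test_sketch() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"                  # bash's time keyword produces the real/user/sys lines
        echo "END TEST $name"
    }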
00:07:19.229 00:07:19.229 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:19.229 ------------------------------------------------------------------------------------ 00:07:19.229 0,0 389632/s 1522 MiB/s 0 0 00:07:19.229 ==================================================================================== 00:07:19.229 Total 389632/s 1522 MiB/s 0 0' 00:07:19.229 14:13:24 -- accel/accel.sh@20 -- # IFS=: 00:07:19.229 14:13:24 -- accel/accel.sh@20 -- # read -r var val 00:07:19.229 14:13:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:19.229 14:13:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:19.229 14:13:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.229 14:13:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.229 14:13:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.230 14:13:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.230 14:13:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.230 14:13:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.230 14:13:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.230 14:13:24 -- accel/accel.sh@42 -- # jq -r . 00:07:19.230 [2024-12-05 14:13:24.577833] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:19.230 [2024-12-05 14:13:24.578145] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70579 ] 00:07:19.230 [2024-12-05 14:13:24.716682] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.230 [2024-12-05 14:13:24.780728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.230 14:13:24 -- accel/accel.sh@21 -- # val= 00:07:19.230 14:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.230 14:13:24 -- accel/accel.sh@20 -- # IFS=: 00:07:19.230 14:13:24 -- accel/accel.sh@20 -- # read -r var val 00:07:19.230 14:13:24 -- accel/accel.sh@21 -- # val= 00:07:19.230 14:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.230 14:13:24 -- accel/accel.sh@20 -- # IFS=: 00:07:19.230 14:13:24 -- accel/accel.sh@20 -- # read -r var val 00:07:19.230 14:13:24 -- accel/accel.sh@21 -- # val=0x1 00:07:19.230 14:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.230 14:13:24 -- accel/accel.sh@20 -- # IFS=: 00:07:19.230 14:13:24 -- accel/accel.sh@20 -- # read -r var val 00:07:19.230 14:13:24 -- accel/accel.sh@21 -- # val= 00:07:19.230 14:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.230 14:13:24 -- accel/accel.sh@20 -- # IFS=: 00:07:19.230 14:13:24 -- accel/accel.sh@20 -- # read -r var val 00:07:19.230 14:13:24 -- accel/accel.sh@21 -- # val= 00:07:19.230 14:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.230 14:13:24 -- accel/accel.sh@20 -- # IFS=: 00:07:19.230 14:13:24 -- accel/accel.sh@20 -- # read -r var val 00:07:19.230 14:13:24 -- accel/accel.sh@21 -- # val=copy 00:07:19.230 14:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.230 14:13:24 -- accel/accel.sh@24 -- # accel_opc=copy 00:07:19.230 14:13:24 -- accel/accel.sh@20 -- # IFS=: 00:07:19.230 14:13:24 -- accel/accel.sh@20 -- # read -r var val 00:07:19.230 14:13:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:19.230 14:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.230 14:13:24 -- accel/accel.sh@20 -- # IFS=: 00:07:19.230 14:13:24 -- accel/accel.sh@20 -- # read -r var val 00:07:19.230 14:13:24 -- 
accel/accel.sh@21 -- # val= 00:07:19.230 14:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.230 14:13:24 -- accel/accel.sh@20 -- # IFS=: 00:07:19.230 14:13:24 -- accel/accel.sh@20 -- # read -r var val 00:07:19.230 14:13:24 -- accel/accel.sh@21 -- # val=software 00:07:19.230 14:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.230 14:13:24 -- accel/accel.sh@23 -- # accel_module=software 00:07:19.230 14:13:24 -- accel/accel.sh@20 -- # IFS=: 00:07:19.230 14:13:24 -- accel/accel.sh@20 -- # read -r var val 00:07:19.230 14:13:24 -- accel/accel.sh@21 -- # val=32 00:07:19.230 14:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.230 14:13:24 -- accel/accel.sh@20 -- # IFS=: 00:07:19.230 14:13:24 -- accel/accel.sh@20 -- # read -r var val 00:07:19.230 14:13:24 -- accel/accel.sh@21 -- # val=32 00:07:19.230 14:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.230 14:13:24 -- accel/accel.sh@20 -- # IFS=: 00:07:19.230 14:13:24 -- accel/accel.sh@20 -- # read -r var val 00:07:19.230 14:13:24 -- accel/accel.sh@21 -- # val=1 00:07:19.230 14:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.230 14:13:24 -- accel/accel.sh@20 -- # IFS=: 00:07:19.230 14:13:24 -- accel/accel.sh@20 -- # read -r var val 00:07:19.230 14:13:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:19.230 14:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.230 14:13:24 -- accel/accel.sh@20 -- # IFS=: 00:07:19.230 14:13:24 -- accel/accel.sh@20 -- # read -r var val 00:07:19.230 14:13:24 -- accel/accel.sh@21 -- # val=Yes 00:07:19.230 14:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.230 14:13:24 -- accel/accel.sh@20 -- # IFS=: 00:07:19.230 14:13:24 -- accel/accel.sh@20 -- # read -r var val 00:07:19.230 14:13:24 -- accel/accel.sh@21 -- # val= 00:07:19.230 14:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.230 14:13:24 -- accel/accel.sh@20 -- # IFS=: 00:07:19.230 14:13:24 -- accel/accel.sh@20 -- # read -r var val 00:07:19.230 14:13:24 -- accel/accel.sh@21 -- # val= 00:07:19.230 14:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.230 14:13:24 -- accel/accel.sh@20 -- # IFS=: 00:07:19.230 14:13:24 -- accel/accel.sh@20 -- # read -r var val 00:07:20.621 14:13:25 -- accel/accel.sh@21 -- # val= 00:07:20.621 14:13:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.621 14:13:25 -- accel/accel.sh@20 -- # IFS=: 00:07:20.621 14:13:25 -- accel/accel.sh@20 -- # read -r var val 00:07:20.621 14:13:25 -- accel/accel.sh@21 -- # val= 00:07:20.621 14:13:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.621 14:13:25 -- accel/accel.sh@20 -- # IFS=: 00:07:20.621 14:13:25 -- accel/accel.sh@20 -- # read -r var val 00:07:20.621 14:13:25 -- accel/accel.sh@21 -- # val= 00:07:20.621 14:13:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.621 14:13:25 -- accel/accel.sh@20 -- # IFS=: 00:07:20.621 14:13:25 -- accel/accel.sh@20 -- # read -r var val 00:07:20.621 14:13:25 -- accel/accel.sh@21 -- # val= 00:07:20.621 14:13:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.621 14:13:25 -- accel/accel.sh@20 -- # IFS=: 00:07:20.621 14:13:25 -- accel/accel.sh@20 -- # read -r var val 00:07:20.621 14:13:25 -- accel/accel.sh@21 -- # val= 00:07:20.621 14:13:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.621 14:13:25 -- accel/accel.sh@20 -- # IFS=: 00:07:20.621 14:13:25 -- accel/accel.sh@20 -- # read -r var val 00:07:20.621 14:13:25 -- accel/accel.sh@21 -- # val= 00:07:20.621 14:13:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.621 14:13:25 -- accel/accel.sh@20 -- # IFS=: 00:07:20.621 
************************************ 00:07:20.621 END TEST accel_copy 00:07:20.621 ************************************ 00:07:20.621 14:13:25 -- accel/accel.sh@20 -- # read -r var val 00:07:20.621 14:13:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:20.621 14:13:25 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:07:20.622 14:13:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.622 00:07:20.622 real 0m2.847s 00:07:20.622 user 0m2.422s 00:07:20.622 sys 0m0.222s 00:07:20.622 14:13:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:20.622 14:13:25 -- common/autotest_common.sh@10 -- # set +x 00:07:20.622 14:13:26 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:20.622 14:13:26 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:20.622 14:13:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:20.622 14:13:26 -- common/autotest_common.sh@10 -- # set +x 00:07:20.622 ************************************ 00:07:20.622 START TEST accel_fill 00:07:20.622 ************************************ 00:07:20.622 14:13:26 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:20.622 14:13:26 -- accel/accel.sh@16 -- # local accel_opc 00:07:20.622 14:13:26 -- accel/accel.sh@17 -- # local accel_module 00:07:20.622 14:13:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:20.622 14:13:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:20.622 14:13:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.622 14:13:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.622 14:13:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.622 14:13:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.622 14:13:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.622 14:13:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.622 14:13:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.622 14:13:26 -- accel/accel.sh@42 -- # jq -r . 00:07:20.622 [2024-12-05 14:13:26.073783] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:20.622 [2024-12-05 14:13:26.073895] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70619 ] 00:07:20.622 [2024-12-05 14:13:26.210312] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.880 [2024-12-05 14:13:26.270155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.815 14:13:27 -- accel/accel.sh@18 -- # out=' 00:07:21.815 SPDK Configuration: 00:07:21.815 Core mask: 0x1 00:07:21.815 00:07:21.815 Accel Perf Configuration: 00:07:21.815 Workload Type: fill 00:07:21.815 Fill pattern: 0x80 00:07:21.815 Transfer size: 4096 bytes 00:07:21.815 Vector count 1 00:07:21.815 Module: software 00:07:21.815 Queue depth: 64 00:07:21.815 Allocate depth: 64 00:07:21.815 # threads/core: 1 00:07:21.815 Run time: 1 seconds 00:07:21.815 Verify: Yes 00:07:21.815 00:07:21.815 Running for 1 seconds... 
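The fill run configured above is launched as accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y; per the option help printed earlier in this section, -f sets the fill byte (128 = 0x80, matching the "Fill pattern: 0x80" line), -q the queue depth per core, and -a the number of tasks allocated per core. Invoking the example binary directly with the same flags would look like this (path as used elsewhere in the log):

    # fill workload: byte value 128, queue depth 64, 64 tasks per core, verify results
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y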
00:07:21.815 00:07:21.815 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:21.815 ------------------------------------------------------------------------------------ 00:07:21.815 0,0 561344/s 2192 MiB/s 0 0 00:07:21.815 ==================================================================================== 00:07:21.815 Total 561344/s 2192 MiB/s 0 0' 00:07:21.815 14:13:27 -- accel/accel.sh@20 -- # IFS=: 00:07:21.815 14:13:27 -- accel/accel.sh@20 -- # read -r var val 00:07:21.815 14:13:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:22.075 14:13:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:22.075 14:13:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.075 14:13:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:22.075 14:13:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.075 14:13:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.075 14:13:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:22.075 14:13:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:22.075 14:13:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:22.075 14:13:27 -- accel/accel.sh@42 -- # jq -r . 00:07:22.075 [2024-12-05 14:13:27.476127] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:22.075 [2024-12-05 14:13:27.476216] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70635 ] 00:07:22.075 [2024-12-05 14:13:27.598539] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.075 [2024-12-05 14:13:27.648034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.075 14:13:27 -- accel/accel.sh@21 -- # val= 00:07:22.075 14:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.075 14:13:27 -- accel/accel.sh@20 -- # IFS=: 00:07:22.075 14:13:27 -- accel/accel.sh@20 -- # read -r var val 00:07:22.075 14:13:27 -- accel/accel.sh@21 -- # val= 00:07:22.075 14:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.075 14:13:27 -- accel/accel.sh@20 -- # IFS=: 00:07:22.075 14:13:27 -- accel/accel.sh@20 -- # read -r var val 00:07:22.075 14:13:27 -- accel/accel.sh@21 -- # val=0x1 00:07:22.075 14:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.075 14:13:27 -- accel/accel.sh@20 -- # IFS=: 00:07:22.075 14:13:27 -- accel/accel.sh@20 -- # read -r var val 00:07:22.075 14:13:27 -- accel/accel.sh@21 -- # val= 00:07:22.075 14:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.075 14:13:27 -- accel/accel.sh@20 -- # IFS=: 00:07:22.075 14:13:27 -- accel/accel.sh@20 -- # read -r var val 00:07:22.075 14:13:27 -- accel/accel.sh@21 -- # val= 00:07:22.075 14:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.075 14:13:27 -- accel/accel.sh@20 -- # IFS=: 00:07:22.075 14:13:27 -- accel/accel.sh@20 -- # read -r var val 00:07:22.075 14:13:27 -- accel/accel.sh@21 -- # val=fill 00:07:22.075 14:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.075 14:13:27 -- accel/accel.sh@24 -- # accel_opc=fill 00:07:22.075 14:13:27 -- accel/accel.sh@20 -- # IFS=: 00:07:22.075 14:13:27 -- accel/accel.sh@20 -- # read -r var val 00:07:22.075 14:13:27 -- accel/accel.sh@21 -- # val=0x80 00:07:22.075 14:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.075 14:13:27 -- accel/accel.sh@20 -- # IFS=: 00:07:22.075 14:13:27 -- accel/accel.sh@20 -- # read -r var val 
00:07:22.075 14:13:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:22.075 14:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.075 14:13:27 -- accel/accel.sh@20 -- # IFS=: 00:07:22.075 14:13:27 -- accel/accel.sh@20 -- # read -r var val 00:07:22.075 14:13:27 -- accel/accel.sh@21 -- # val= 00:07:22.075 14:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.075 14:13:27 -- accel/accel.sh@20 -- # IFS=: 00:07:22.075 14:13:27 -- accel/accel.sh@20 -- # read -r var val 00:07:22.075 14:13:27 -- accel/accel.sh@21 -- # val=software 00:07:22.075 14:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.075 14:13:27 -- accel/accel.sh@23 -- # accel_module=software 00:07:22.075 14:13:27 -- accel/accel.sh@20 -- # IFS=: 00:07:22.075 14:13:27 -- accel/accel.sh@20 -- # read -r var val 00:07:22.075 14:13:27 -- accel/accel.sh@21 -- # val=64 00:07:22.075 14:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.075 14:13:27 -- accel/accel.sh@20 -- # IFS=: 00:07:22.075 14:13:27 -- accel/accel.sh@20 -- # read -r var val 00:07:22.075 14:13:27 -- accel/accel.sh@21 -- # val=64 00:07:22.075 14:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.075 14:13:27 -- accel/accel.sh@20 -- # IFS=: 00:07:22.075 14:13:27 -- accel/accel.sh@20 -- # read -r var val 00:07:22.075 14:13:27 -- accel/accel.sh@21 -- # val=1 00:07:22.075 14:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.075 14:13:27 -- accel/accel.sh@20 -- # IFS=: 00:07:22.075 14:13:27 -- accel/accel.sh@20 -- # read -r var val 00:07:22.075 14:13:27 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:22.075 14:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.075 14:13:27 -- accel/accel.sh@20 -- # IFS=: 00:07:22.075 14:13:27 -- accel/accel.sh@20 -- # read -r var val 00:07:22.075 14:13:27 -- accel/accel.sh@21 -- # val=Yes 00:07:22.075 14:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.075 14:13:27 -- accel/accel.sh@20 -- # IFS=: 00:07:22.075 14:13:27 -- accel/accel.sh@20 -- # read -r var val 00:07:22.075 14:13:27 -- accel/accel.sh@21 -- # val= 00:07:22.075 14:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.075 14:13:27 -- accel/accel.sh@20 -- # IFS=: 00:07:22.075 14:13:27 -- accel/accel.sh@20 -- # read -r var val 00:07:22.075 14:13:27 -- accel/accel.sh@21 -- # val= 00:07:22.075 14:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.075 14:13:27 -- accel/accel.sh@20 -- # IFS=: 00:07:22.075 14:13:27 -- accel/accel.sh@20 -- # read -r var val 00:07:23.455 14:13:28 -- accel/accel.sh@21 -- # val= 00:07:23.455 14:13:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.455 14:13:28 -- accel/accel.sh@20 -- # IFS=: 00:07:23.455 14:13:28 -- accel/accel.sh@20 -- # read -r var val 00:07:23.455 14:13:28 -- accel/accel.sh@21 -- # val= 00:07:23.455 14:13:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.455 14:13:28 -- accel/accel.sh@20 -- # IFS=: 00:07:23.455 14:13:28 -- accel/accel.sh@20 -- # read -r var val 00:07:23.455 14:13:28 -- accel/accel.sh@21 -- # val= 00:07:23.455 14:13:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.455 14:13:28 -- accel/accel.sh@20 -- # IFS=: 00:07:23.455 14:13:28 -- accel/accel.sh@20 -- # read -r var val 00:07:23.455 14:13:28 -- accel/accel.sh@21 -- # val= 00:07:23.455 14:13:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.455 14:13:28 -- accel/accel.sh@20 -- # IFS=: 00:07:23.455 14:13:28 -- accel/accel.sh@20 -- # read -r var val 00:07:23.455 14:13:28 -- accel/accel.sh@21 -- # val= 00:07:23.455 14:13:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.455 14:13:28 -- accel/accel.sh@20 -- # IFS=: 
00:07:23.455 14:13:28 -- accel/accel.sh@20 -- # read -r var val 00:07:23.455 14:13:28 -- accel/accel.sh@21 -- # val= 00:07:23.455 14:13:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.455 14:13:28 -- accel/accel.sh@20 -- # IFS=: 00:07:23.455 14:13:28 -- accel/accel.sh@20 -- # read -r var val 00:07:23.455 14:13:28 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:23.455 14:13:28 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:07:23.455 14:13:28 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.455 00:07:23.455 real 0m2.797s 00:07:23.455 user 0m2.367s 00:07:23.455 sys 0m0.228s 00:07:23.455 ************************************ 00:07:23.455 END TEST accel_fill 00:07:23.455 ************************************ 00:07:23.455 14:13:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:23.455 14:13:28 -- common/autotest_common.sh@10 -- # set +x 00:07:23.455 14:13:28 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:23.455 14:13:28 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:23.455 14:13:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.455 14:13:28 -- common/autotest_common.sh@10 -- # set +x 00:07:23.455 ************************************ 00:07:23.455 START TEST accel_copy_crc32c 00:07:23.455 ************************************ 00:07:23.455 14:13:28 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:07:23.455 14:13:28 -- accel/accel.sh@16 -- # local accel_opc 00:07:23.455 14:13:28 -- accel/accel.sh@17 -- # local accel_module 00:07:23.455 14:13:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:23.455 14:13:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:23.455 14:13:28 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.455 14:13:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.455 14:13:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.455 14:13:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.455 14:13:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.455 14:13:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.455 14:13:28 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.455 14:13:28 -- accel/accel.sh@42 -- # jq -r . 00:07:23.455 [2024-12-05 14:13:28.921557] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:23.455 [2024-12-05 14:13:28.921658] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70670 ] 00:07:23.455 [2024-12-05 14:13:29.057793] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.715 [2024-12-05 14:13:29.117455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.091 14:13:30 -- accel/accel.sh@18 -- # out=' 00:07:25.091 SPDK Configuration: 00:07:25.091 Core mask: 0x1 00:07:25.091 00:07:25.091 Accel Perf Configuration: 00:07:25.091 Workload Type: copy_crc32c 00:07:25.091 CRC-32C seed: 0 00:07:25.091 Vector size: 4096 bytes 00:07:25.091 Transfer size: 4096 bytes 00:07:25.091 Vector count 1 00:07:25.091 Module: software 00:07:25.091 Queue depth: 32 00:07:25.091 Allocate depth: 32 00:07:25.091 # threads/core: 1 00:07:25.091 Run time: 1 seconds 00:07:25.091 Verify: Yes 00:07:25.091 00:07:25.091 Running for 1 seconds... 
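The copy_crc32c workload that starts here combines a buffer copy with a CRC-32C calculation over the copied data, so in the results that follow its rate comes in below both the plain copy and plain crc32c runs shown earlier. The test drives it with the documented defaults for seed and transfer size (0 and 4 KiB, as echoed in the configuration banner above), equivalent to a direct call such as:

    # copy + CRC-32C in one operation; seed and transfer size left at their defaults
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y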
00:07:25.091 00:07:25.091 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:25.091 ------------------------------------------------------------------------------------ 00:07:25.091 0,0 307072/s 1199 MiB/s 0 0 00:07:25.091 ==================================================================================== 00:07:25.091 Total 307072/s 1199 MiB/s 0 0' 00:07:25.091 14:13:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:25.091 14:13:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.091 14:13:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.091 14:13:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:25.091 14:13:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.091 14:13:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.091 14:13:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.091 14:13:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.091 14:13:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.091 14:13:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.091 14:13:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.091 14:13:30 -- accel/accel.sh@42 -- # jq -r . 00:07:25.091 [2024-12-05 14:13:30.331577] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:25.091 [2024-12-05 14:13:30.332162] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70691 ] 00:07:25.091 [2024-12-05 14:13:30.466917] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.091 [2024-12-05 14:13:30.520127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.091 14:13:30 -- accel/accel.sh@21 -- # val= 00:07:25.091 14:13:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.091 14:13:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.091 14:13:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.091 14:13:30 -- accel/accel.sh@21 -- # val= 00:07:25.091 14:13:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.091 14:13:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.091 14:13:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.091 14:13:30 -- accel/accel.sh@21 -- # val=0x1 00:07:25.091 14:13:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.091 14:13:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.091 14:13:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.091 14:13:30 -- accel/accel.sh@21 -- # val= 00:07:25.091 14:13:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.091 14:13:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.091 14:13:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.091 14:13:30 -- accel/accel.sh@21 -- # val= 00:07:25.091 14:13:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.091 14:13:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.091 14:13:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.091 14:13:30 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:25.091 14:13:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.091 14:13:30 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:25.091 14:13:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.091 14:13:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.091 14:13:30 -- accel/accel.sh@21 -- # val=0 00:07:25.091 14:13:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.091 14:13:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.091 14:13:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.091 
14:13:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:25.091 14:13:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.091 14:13:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.091 14:13:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.091 14:13:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:25.091 14:13:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.091 14:13:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.091 14:13:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.091 14:13:30 -- accel/accel.sh@21 -- # val= 00:07:25.091 14:13:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.091 14:13:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.091 14:13:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.091 14:13:30 -- accel/accel.sh@21 -- # val=software 00:07:25.091 14:13:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.091 14:13:30 -- accel/accel.sh@23 -- # accel_module=software 00:07:25.091 14:13:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.091 14:13:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.091 14:13:30 -- accel/accel.sh@21 -- # val=32 00:07:25.091 14:13:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.091 14:13:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.091 14:13:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.091 14:13:30 -- accel/accel.sh@21 -- # val=32 00:07:25.091 14:13:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.091 14:13:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.091 14:13:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.091 14:13:30 -- accel/accel.sh@21 -- # val=1 00:07:25.091 14:13:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.091 14:13:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.091 14:13:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.091 14:13:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:25.091 14:13:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.091 14:13:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.091 14:13:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.091 14:13:30 -- accel/accel.sh@21 -- # val=Yes 00:07:25.091 14:13:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.091 14:13:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.091 14:13:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.091 14:13:30 -- accel/accel.sh@21 -- # val= 00:07:25.091 14:13:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.091 14:13:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.091 14:13:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.091 14:13:30 -- accel/accel.sh@21 -- # val= 00:07:25.091 14:13:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.091 14:13:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.091 14:13:30 -- accel/accel.sh@20 -- # read -r var val 00:07:26.580 14:13:31 -- accel/accel.sh@21 -- # val= 00:07:26.580 14:13:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.580 14:13:31 -- accel/accel.sh@20 -- # IFS=: 00:07:26.580 14:13:31 -- accel/accel.sh@20 -- # read -r var val 00:07:26.580 14:13:31 -- accel/accel.sh@21 -- # val= 00:07:26.580 14:13:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.580 14:13:31 -- accel/accel.sh@20 -- # IFS=: 00:07:26.580 14:13:31 -- accel/accel.sh@20 -- # read -r var val 00:07:26.580 14:13:31 -- accel/accel.sh@21 -- # val= 00:07:26.580 14:13:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.580 14:13:31 -- accel/accel.sh@20 -- # IFS=: 00:07:26.580 14:13:31 -- accel/accel.sh@20 -- # read -r var val 00:07:26.580 14:13:31 -- accel/accel.sh@21 -- # val= 00:07:26.580 14:13:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.580 14:13:31 -- accel/accel.sh@20 -- # IFS=: 
00:07:26.580 14:13:31 -- accel/accel.sh@20 -- # read -r var val 00:07:26.580 ************************************ 00:07:26.580 END TEST accel_copy_crc32c 00:07:26.580 ************************************ 00:07:26.580 14:13:31 -- accel/accel.sh@21 -- # val= 00:07:26.580 14:13:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.580 14:13:31 -- accel/accel.sh@20 -- # IFS=: 00:07:26.580 14:13:31 -- accel/accel.sh@20 -- # read -r var val 00:07:26.580 14:13:31 -- accel/accel.sh@21 -- # val= 00:07:26.580 14:13:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.580 14:13:31 -- accel/accel.sh@20 -- # IFS=: 00:07:26.580 14:13:31 -- accel/accel.sh@20 -- # read -r var val 00:07:26.580 14:13:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:26.580 14:13:31 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:26.580 14:13:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.580 00:07:26.580 real 0m2.811s 00:07:26.580 user 0m2.393s 00:07:26.580 sys 0m0.217s 00:07:26.580 14:13:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:26.580 14:13:31 -- common/autotest_common.sh@10 -- # set +x 00:07:26.580 14:13:31 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:26.580 14:13:31 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:26.580 14:13:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:26.580 14:13:31 -- common/autotest_common.sh@10 -- # set +x 00:07:26.580 ************************************ 00:07:26.580 START TEST accel_copy_crc32c_C2 00:07:26.580 ************************************ 00:07:26.580 14:13:31 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:26.580 14:13:31 -- accel/accel.sh@16 -- # local accel_opc 00:07:26.580 14:13:31 -- accel/accel.sh@17 -- # local accel_module 00:07:26.580 14:13:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:26.580 14:13:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:26.580 14:13:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.580 14:13:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.580 14:13:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.580 14:13:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.580 14:13:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.580 14:13:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.580 14:13:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.580 14:13:31 -- accel/accel.sh@42 -- # jq -r . 00:07:26.580 [2024-12-05 14:13:31.788950] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:26.580 [2024-12-05 14:13:31.789063] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70720 ] 00:07:26.580 [2024-12-05 14:13:31.925662] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.580 [2024-12-05 14:13:31.988444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.547 14:13:33 -- accel/accel.sh@18 -- # out=' 00:07:27.547 SPDK Configuration: 00:07:27.547 Core mask: 0x1 00:07:27.547 00:07:27.547 Accel Perf Configuration: 00:07:27.547 Workload Type: copy_crc32c 00:07:27.547 CRC-32C seed: 0 00:07:27.547 Vector size: 4096 bytes 00:07:27.547 Transfer size: 8192 bytes 00:07:27.547 Vector count 2 00:07:27.547 Module: software 00:07:27.547 Queue depth: 32 00:07:27.547 Allocate depth: 32 00:07:27.547 # threads/core: 1 00:07:27.547 Run time: 1 seconds 00:07:27.547 Verify: Yes 00:07:27.547 00:07:27.547 Running for 1 seconds... 00:07:27.547 00:07:27.547 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:27.547 ------------------------------------------------------------------------------------ 00:07:27.547 0,0 221504/s 1730 MiB/s 0 0 00:07:27.547 ==================================================================================== 00:07:27.547 Total 221504/s 865 MiB/s 0 0' 00:07:27.547 14:13:33 -- accel/accel.sh@20 -- # IFS=: 00:07:27.547 14:13:33 -- accel/accel.sh@20 -- # read -r var val 00:07:27.547 14:13:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:27.547 14:13:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:27.547 14:13:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.547 14:13:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.547 14:13:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.547 14:13:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.547 14:13:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.547 14:13:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.547 14:13:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.547 14:13:33 -- accel/accel.sh@42 -- # jq -r . 00:07:27.805 [2024-12-05 14:13:33.207827] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
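Throughout these runs the example binary is started with -c /dev/fd/62 while the configuration assembled by build_accel_config appears to be passed through jq -r ., i.e. accel_perf receives its accel JSON configuration on a file descriptor rather than from a file on disk. A rough illustration of that pattern is below; whether an empty JSON object is an acceptable configuration is an assumption here, since the config accel.sh actually builds is not shown in this excerpt:

    # illustrative: hand a JSON accel config to accel_perf via process substitution instead of a file path
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c <(echo '{}') -t 1 -w copy_crc32c -y -C 2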
00:07:27.805 [2024-12-05 14:13:33.208719] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70740 ] 00:07:27.805 [2024-12-05 14:13:33.346190] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.805 [2024-12-05 14:13:33.403677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.064 14:13:33 -- accel/accel.sh@21 -- # val= 00:07:28.064 14:13:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.064 14:13:33 -- accel/accel.sh@20 -- # IFS=: 00:07:28.064 14:13:33 -- accel/accel.sh@20 -- # read -r var val 00:07:28.064 14:13:33 -- accel/accel.sh@21 -- # val= 00:07:28.064 14:13:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.064 14:13:33 -- accel/accel.sh@20 -- # IFS=: 00:07:28.064 14:13:33 -- accel/accel.sh@20 -- # read -r var val 00:07:28.064 14:13:33 -- accel/accel.sh@21 -- # val=0x1 00:07:28.064 14:13:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.064 14:13:33 -- accel/accel.sh@20 -- # IFS=: 00:07:28.064 14:13:33 -- accel/accel.sh@20 -- # read -r var val 00:07:28.064 14:13:33 -- accel/accel.sh@21 -- # val= 00:07:28.064 14:13:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.064 14:13:33 -- accel/accel.sh@20 -- # IFS=: 00:07:28.064 14:13:33 -- accel/accel.sh@20 -- # read -r var val 00:07:28.064 14:13:33 -- accel/accel.sh@21 -- # val= 00:07:28.064 14:13:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.065 14:13:33 -- accel/accel.sh@20 -- # IFS=: 00:07:28.065 14:13:33 -- accel/accel.sh@20 -- # read -r var val 00:07:28.065 14:13:33 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:28.065 14:13:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.065 14:13:33 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:28.065 14:13:33 -- accel/accel.sh@20 -- # IFS=: 00:07:28.065 14:13:33 -- accel/accel.sh@20 -- # read -r var val 00:07:28.065 14:13:33 -- accel/accel.sh@21 -- # val=0 00:07:28.065 14:13:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.065 14:13:33 -- accel/accel.sh@20 -- # IFS=: 00:07:28.065 14:13:33 -- accel/accel.sh@20 -- # read -r var val 00:07:28.065 14:13:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:28.065 14:13:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.065 14:13:33 -- accel/accel.sh@20 -- # IFS=: 00:07:28.065 14:13:33 -- accel/accel.sh@20 -- # read -r var val 00:07:28.065 14:13:33 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:28.065 14:13:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.065 14:13:33 -- accel/accel.sh@20 -- # IFS=: 00:07:28.065 14:13:33 -- accel/accel.sh@20 -- # read -r var val 00:07:28.065 14:13:33 -- accel/accel.sh@21 -- # val= 00:07:28.065 14:13:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.065 14:13:33 -- accel/accel.sh@20 -- # IFS=: 00:07:28.065 14:13:33 -- accel/accel.sh@20 -- # read -r var val 00:07:28.065 14:13:33 -- accel/accel.sh@21 -- # val=software 00:07:28.065 14:13:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.065 14:13:33 -- accel/accel.sh@23 -- # accel_module=software 00:07:28.065 14:13:33 -- accel/accel.sh@20 -- # IFS=: 00:07:28.065 14:13:33 -- accel/accel.sh@20 -- # read -r var val 00:07:28.065 14:13:33 -- accel/accel.sh@21 -- # val=32 00:07:28.065 14:13:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.065 14:13:33 -- accel/accel.sh@20 -- # IFS=: 00:07:28.065 14:13:33 -- accel/accel.sh@20 -- # read -r var val 00:07:28.065 14:13:33 -- accel/accel.sh@21 -- # val=32 
00:07:28.065 14:13:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.065 14:13:33 -- accel/accel.sh@20 -- # IFS=: 00:07:28.065 14:13:33 -- accel/accel.sh@20 -- # read -r var val 00:07:28.065 14:13:33 -- accel/accel.sh@21 -- # val=1 00:07:28.065 14:13:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.065 14:13:33 -- accel/accel.sh@20 -- # IFS=: 00:07:28.065 14:13:33 -- accel/accel.sh@20 -- # read -r var val 00:07:28.065 14:13:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:28.065 14:13:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.065 14:13:33 -- accel/accel.sh@20 -- # IFS=: 00:07:28.065 14:13:33 -- accel/accel.sh@20 -- # read -r var val 00:07:28.065 14:13:33 -- accel/accel.sh@21 -- # val=Yes 00:07:28.065 14:13:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.065 14:13:33 -- accel/accel.sh@20 -- # IFS=: 00:07:28.065 14:13:33 -- accel/accel.sh@20 -- # read -r var val 00:07:28.065 14:13:33 -- accel/accel.sh@21 -- # val= 00:07:28.065 14:13:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.065 14:13:33 -- accel/accel.sh@20 -- # IFS=: 00:07:28.065 14:13:33 -- accel/accel.sh@20 -- # read -r var val 00:07:28.065 14:13:33 -- accel/accel.sh@21 -- # val= 00:07:28.065 14:13:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.065 14:13:33 -- accel/accel.sh@20 -- # IFS=: 00:07:28.065 14:13:33 -- accel/accel.sh@20 -- # read -r var val 00:07:29.000 14:13:34 -- accel/accel.sh@21 -- # val= 00:07:29.000 14:13:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.000 14:13:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.000 14:13:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.000 14:13:34 -- accel/accel.sh@21 -- # val= 00:07:29.001 14:13:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.001 14:13:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.001 14:13:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.001 14:13:34 -- accel/accel.sh@21 -- # val= 00:07:29.001 14:13:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.001 14:13:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.001 14:13:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.001 14:13:34 -- accel/accel.sh@21 -- # val= 00:07:29.001 14:13:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.001 14:13:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.001 14:13:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.001 14:13:34 -- accel/accel.sh@21 -- # val= 00:07:29.001 14:13:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.001 14:13:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.001 14:13:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.001 14:13:34 -- accel/accel.sh@21 -- # val= 00:07:29.001 14:13:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.001 14:13:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.001 14:13:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.001 14:13:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:29.001 14:13:34 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:29.001 14:13:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.001 00:07:29.001 real 0m2.830s 00:07:29.001 user 0m2.390s 00:07:29.001 sys 0m0.234s 00:07:29.001 14:13:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:29.001 ************************************ 00:07:29.001 END TEST accel_copy_crc32c_C2 00:07:29.001 ************************************ 00:07:29.001 14:13:34 -- common/autotest_common.sh@10 -- # set +x 00:07:29.001 14:13:34 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:29.001 14:13:34 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
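Annotation for readers skimming the log: the accel_copy_crc32c_C2 test above drives the copy_crc32c workload with two 4096-byte source vectors, an 8192-byte transfer and CRC-32C seed 0 on the software module. The following is a minimal, self-contained C sketch of what that operation computes, not SPDK's actual accel code; the buffer fill value and the zlib-style seed/finalization convention are assumptions made for illustration.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Bitwise CRC-32C (Castagnoli, reflected polynomial 0x82F63B78).
     * With the ~crc on entry and exit, feeding the previous return value
     * back in extends the checksum across consecutive buffers. */
    static uint32_t crc32c(uint32_t crc, const uint8_t *buf, size_t len)
    {
        crc = ~crc;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++)
                crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
        }
        return ~crc;
    }

    int main(void)
    {
        /* Parameters from the run above: vector size 4096, vector count 2,
         * transfer size 8192, CRC-32C seed 0. */
        enum { VEC_SIZE = 4096, VEC_COUNT = 2 };
        static uint8_t src[VEC_COUNT][VEC_SIZE];
        static uint8_t dst[VEC_COUNT * VEC_SIZE];

        memset(src, 0xA5, sizeof(src));          /* illustrative fill value */

        uint32_t crc = 0;                        /* seed 0, as in the log */
        for (int v = 0; v < VEC_COUNT; v++) {
            /* copy_crc32c = copy each source vector and fold it into the CRC */
            memcpy(dst + (size_t)v * VEC_SIZE, src[v], VEC_SIZE);
            crc = crc32c(crc, src[v], VEC_SIZE);
        }

        printf("crc32c over %d bytes = 0x%08x\n", VEC_COUNT * VEC_SIZE, (unsigned)crc);
        return 0;
    }

Chaining the CRC across the two vectors is what lets the operation report a single checksum for the full 8192-byte transfer.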
00:07:29.001 14:13:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:29.001 14:13:34 -- common/autotest_common.sh@10 -- # set +x 00:07:29.259 ************************************ 00:07:29.259 START TEST accel_dualcast 00:07:29.259 ************************************ 00:07:29.259 14:13:34 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:07:29.259 14:13:34 -- accel/accel.sh@16 -- # local accel_opc 00:07:29.259 14:13:34 -- accel/accel.sh@17 -- # local accel_module 00:07:29.259 14:13:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:29.259 14:13:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:29.259 14:13:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.259 14:13:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:29.259 14:13:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.259 14:13:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.259 14:13:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:29.259 14:13:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:29.259 14:13:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:29.259 14:13:34 -- accel/accel.sh@42 -- # jq -r . 00:07:29.259 [2024-12-05 14:13:34.680877] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:29.259 [2024-12-05 14:13:34.680979] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70774 ] 00:07:29.259 [2024-12-05 14:13:34.818212] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.259 [2024-12-05 14:13:34.870771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.650 14:13:36 -- accel/accel.sh@18 -- # out=' 00:07:30.650 SPDK Configuration: 00:07:30.650 Core mask: 0x1 00:07:30.650 00:07:30.650 Accel Perf Configuration: 00:07:30.650 Workload Type: dualcast 00:07:30.650 Transfer size: 4096 bytes 00:07:30.650 Vector count 1 00:07:30.650 Module: software 00:07:30.650 Queue depth: 32 00:07:30.650 Allocate depth: 32 00:07:30.650 # threads/core: 1 00:07:30.650 Run time: 1 seconds 00:07:30.650 Verify: Yes 00:07:30.650 00:07:30.650 Running for 1 seconds... 00:07:30.650 00:07:30.651 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:30.651 ------------------------------------------------------------------------------------ 00:07:30.651 0,0 424480/s 1658 MiB/s 0 0 00:07:30.651 ==================================================================================== 00:07:30.651 Total 424480/s 1658 MiB/s 0 0' 00:07:30.651 14:13:36 -- accel/accel.sh@20 -- # IFS=: 00:07:30.651 14:13:36 -- accel/accel.sh@20 -- # read -r var val 00:07:30.651 14:13:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:30.651 14:13:36 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.651 14:13:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:30.651 14:13:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:30.651 14:13:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.651 14:13:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.651 14:13:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:30.651 14:13:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:30.651 14:13:36 -- accel/accel.sh@41 -- # local IFS=, 00:07:30.651 14:13:36 -- accel/accel.sh@42 -- # jq -r . 
00:07:30.651 [2024-12-05 14:13:36.099674] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:30.651 [2024-12-05 14:13:36.099944] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70794 ] 00:07:30.651 [2024-12-05 14:13:36.237085] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.651 [2024-12-05 14:13:36.293115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.909 14:13:36 -- accel/accel.sh@21 -- # val= 00:07:30.909 14:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.909 14:13:36 -- accel/accel.sh@20 -- # IFS=: 00:07:30.909 14:13:36 -- accel/accel.sh@20 -- # read -r var val 00:07:30.909 14:13:36 -- accel/accel.sh@21 -- # val= 00:07:30.909 14:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.909 14:13:36 -- accel/accel.sh@20 -- # IFS=: 00:07:30.909 14:13:36 -- accel/accel.sh@20 -- # read -r var val 00:07:30.909 14:13:36 -- accel/accel.sh@21 -- # val=0x1 00:07:30.909 14:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.909 14:13:36 -- accel/accel.sh@20 -- # IFS=: 00:07:30.909 14:13:36 -- accel/accel.sh@20 -- # read -r var val 00:07:30.909 14:13:36 -- accel/accel.sh@21 -- # val= 00:07:30.909 14:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.909 14:13:36 -- accel/accel.sh@20 -- # IFS=: 00:07:30.909 14:13:36 -- accel/accel.sh@20 -- # read -r var val 00:07:30.909 14:13:36 -- accel/accel.sh@21 -- # val= 00:07:30.909 14:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.909 14:13:36 -- accel/accel.sh@20 -- # IFS=: 00:07:30.909 14:13:36 -- accel/accel.sh@20 -- # read -r var val 00:07:30.909 14:13:36 -- accel/accel.sh@21 -- # val=dualcast 00:07:30.909 14:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.909 14:13:36 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:30.909 14:13:36 -- accel/accel.sh@20 -- # IFS=: 00:07:30.909 14:13:36 -- accel/accel.sh@20 -- # read -r var val 00:07:30.909 14:13:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:30.909 14:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.909 14:13:36 -- accel/accel.sh@20 -- # IFS=: 00:07:30.909 14:13:36 -- accel/accel.sh@20 -- # read -r var val 00:07:30.909 14:13:36 -- accel/accel.sh@21 -- # val= 00:07:30.909 14:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.909 14:13:36 -- accel/accel.sh@20 -- # IFS=: 00:07:30.909 14:13:36 -- accel/accel.sh@20 -- # read -r var val 00:07:30.909 14:13:36 -- accel/accel.sh@21 -- # val=software 00:07:30.909 14:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.909 14:13:36 -- accel/accel.sh@23 -- # accel_module=software 00:07:30.909 14:13:36 -- accel/accel.sh@20 -- # IFS=: 00:07:30.909 14:13:36 -- accel/accel.sh@20 -- # read -r var val 00:07:30.909 14:13:36 -- accel/accel.sh@21 -- # val=32 00:07:30.909 14:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.909 14:13:36 -- accel/accel.sh@20 -- # IFS=: 00:07:30.909 14:13:36 -- accel/accel.sh@20 -- # read -r var val 00:07:30.909 14:13:36 -- accel/accel.sh@21 -- # val=32 00:07:30.909 14:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.909 14:13:36 -- accel/accel.sh@20 -- # IFS=: 00:07:30.909 14:13:36 -- accel/accel.sh@20 -- # read -r var val 00:07:30.909 14:13:36 -- accel/accel.sh@21 -- # val=1 00:07:30.909 14:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.909 14:13:36 -- accel/accel.sh@20 -- # IFS=: 00:07:30.909 
14:13:36 -- accel/accel.sh@20 -- # read -r var val 00:07:30.909 14:13:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:30.909 14:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.909 14:13:36 -- accel/accel.sh@20 -- # IFS=: 00:07:30.909 14:13:36 -- accel/accel.sh@20 -- # read -r var val 00:07:30.909 14:13:36 -- accel/accel.sh@21 -- # val=Yes 00:07:30.909 14:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.909 14:13:36 -- accel/accel.sh@20 -- # IFS=: 00:07:30.909 14:13:36 -- accel/accel.sh@20 -- # read -r var val 00:07:30.909 14:13:36 -- accel/accel.sh@21 -- # val= 00:07:30.909 14:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.909 14:13:36 -- accel/accel.sh@20 -- # IFS=: 00:07:30.909 14:13:36 -- accel/accel.sh@20 -- # read -r var val 00:07:30.909 14:13:36 -- accel/accel.sh@21 -- # val= 00:07:30.909 14:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.909 14:13:36 -- accel/accel.sh@20 -- # IFS=: 00:07:30.909 14:13:36 -- accel/accel.sh@20 -- # read -r var val 00:07:32.287 14:13:37 -- accel/accel.sh@21 -- # val= 00:07:32.287 14:13:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.287 14:13:37 -- accel/accel.sh@20 -- # IFS=: 00:07:32.287 14:13:37 -- accel/accel.sh@20 -- # read -r var val 00:07:32.287 14:13:37 -- accel/accel.sh@21 -- # val= 00:07:32.287 14:13:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.287 14:13:37 -- accel/accel.sh@20 -- # IFS=: 00:07:32.287 14:13:37 -- accel/accel.sh@20 -- # read -r var val 00:07:32.287 14:13:37 -- accel/accel.sh@21 -- # val= 00:07:32.287 14:13:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.287 14:13:37 -- accel/accel.sh@20 -- # IFS=: 00:07:32.287 14:13:37 -- accel/accel.sh@20 -- # read -r var val 00:07:32.287 14:13:37 -- accel/accel.sh@21 -- # val= 00:07:32.287 14:13:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.287 14:13:37 -- accel/accel.sh@20 -- # IFS=: 00:07:32.287 14:13:37 -- accel/accel.sh@20 -- # read -r var val 00:07:32.287 14:13:37 -- accel/accel.sh@21 -- # val= 00:07:32.287 14:13:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.287 14:13:37 -- accel/accel.sh@20 -- # IFS=: 00:07:32.287 14:13:37 -- accel/accel.sh@20 -- # read -r var val 00:07:32.287 14:13:37 -- accel/accel.sh@21 -- # val= 00:07:32.287 14:13:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.287 14:13:37 -- accel/accel.sh@20 -- # IFS=: 00:07:32.287 14:13:37 -- accel/accel.sh@20 -- # read -r var val 00:07:32.287 14:13:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:32.287 14:13:37 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:32.287 14:13:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.287 00:07:32.287 real 0m2.846s 00:07:32.287 user 0m2.425s 00:07:32.287 sys 0m0.218s 00:07:32.287 ************************************ 00:07:32.287 END TEST accel_dualcast 00:07:32.287 ************************************ 00:07:32.287 14:13:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:32.287 14:13:37 -- common/autotest_common.sh@10 -- # set +x 00:07:32.287 14:13:37 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:32.287 14:13:37 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:32.287 14:13:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:32.287 14:13:37 -- common/autotest_common.sh@10 -- # set +x 00:07:32.287 ************************************ 00:07:32.287 START TEST accel_compare 00:07:32.287 ************************************ 00:07:32.287 14:13:37 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:07:32.287 
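Annotation: the accel_dualcast test that just finished copies one 4096-byte source into two destinations per transfer. Below is a hedged C sketch of the software-path semantics only, with illustrative fill values; hardware offload engines can perform the duplication in a single descriptor, but that path is not shown here.

    #include <assert.h>
    #include <stdint.h>
    #include <string.h>

    int main(void)
    {
        enum { XFER = 4096 };                 /* transfer size from the log */
        static uint8_t src[XFER], dst1[XFER], dst2[XFER];

        memset(src, 0x5A, sizeof(src));       /* illustrative fill value */

        /* dualcast: the same source lands in two destinations */
        memcpy(dst1, src, XFER);
        memcpy(dst2, src, XFER);

        assert(memcmp(dst1, src, XFER) == 0);
        assert(memcmp(dst2, src, XFER) == 0);
        return 0;
    }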
14:13:37 -- accel/accel.sh@16 -- # local accel_opc 00:07:32.287 14:13:37 -- accel/accel.sh@17 -- # local accel_module 00:07:32.287 14:13:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:32.287 14:13:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:32.287 14:13:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.287 14:13:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.287 14:13:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.287 14:13:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.287 14:13:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.287 14:13:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:32.287 14:13:37 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.287 14:13:37 -- accel/accel.sh@42 -- # jq -r . 00:07:32.287 [2024-12-05 14:13:37.574285] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:32.287 [2024-12-05 14:13:37.574376] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70828 ] 00:07:32.287 [2024-12-05 14:13:37.710790] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.287 [2024-12-05 14:13:37.766986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.661 14:13:38 -- accel/accel.sh@18 -- # out=' 00:07:33.661 SPDK Configuration: 00:07:33.661 Core mask: 0x1 00:07:33.661 00:07:33.661 Accel Perf Configuration: 00:07:33.661 Workload Type: compare 00:07:33.661 Transfer size: 4096 bytes 00:07:33.661 Vector count 1 00:07:33.661 Module: software 00:07:33.661 Queue depth: 32 00:07:33.661 Allocate depth: 32 00:07:33.661 # threads/core: 1 00:07:33.661 Run time: 1 seconds 00:07:33.661 Verify: Yes 00:07:33.661 00:07:33.661 Running for 1 seconds... 00:07:33.661 00:07:33.661 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:33.661 ------------------------------------------------------------------------------------ 00:07:33.661 0,0 562944/s 2199 MiB/s 0 0 00:07:33.661 ==================================================================================== 00:07:33.661 Total 562944/s 2199 MiB/s 0 0' 00:07:33.661 14:13:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.661 14:13:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:33.661 14:13:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.661 14:13:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:33.661 14:13:38 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.661 14:13:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:33.661 14:13:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.661 14:13:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.661 14:13:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:33.661 14:13:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:33.661 14:13:38 -- accel/accel.sh@41 -- # local IFS=, 00:07:33.661 14:13:38 -- accel/accel.sh@42 -- # jq -r . 00:07:33.661 [2024-12-05 14:13:38.976604] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:33.661 [2024-12-05 14:13:38.976687] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70843 ] 00:07:33.661 [2024-12-05 14:13:39.113395] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.661 [2024-12-05 14:13:39.168094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.661 14:13:39 -- accel/accel.sh@21 -- # val= 00:07:33.661 14:13:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.661 14:13:39 -- accel/accel.sh@20 -- # IFS=: 00:07:33.661 14:13:39 -- accel/accel.sh@20 -- # read -r var val 00:07:33.661 14:13:39 -- accel/accel.sh@21 -- # val= 00:07:33.661 14:13:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.661 14:13:39 -- accel/accel.sh@20 -- # IFS=: 00:07:33.661 14:13:39 -- accel/accel.sh@20 -- # read -r var val 00:07:33.661 14:13:39 -- accel/accel.sh@21 -- # val=0x1 00:07:33.661 14:13:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.661 14:13:39 -- accel/accel.sh@20 -- # IFS=: 00:07:33.661 14:13:39 -- accel/accel.sh@20 -- # read -r var val 00:07:33.661 14:13:39 -- accel/accel.sh@21 -- # val= 00:07:33.661 14:13:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.661 14:13:39 -- accel/accel.sh@20 -- # IFS=: 00:07:33.661 14:13:39 -- accel/accel.sh@20 -- # read -r var val 00:07:33.661 14:13:39 -- accel/accel.sh@21 -- # val= 00:07:33.661 14:13:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.661 14:13:39 -- accel/accel.sh@20 -- # IFS=: 00:07:33.661 14:13:39 -- accel/accel.sh@20 -- # read -r var val 00:07:33.661 14:13:39 -- accel/accel.sh@21 -- # val=compare 00:07:33.661 14:13:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.661 14:13:39 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:33.661 14:13:39 -- accel/accel.sh@20 -- # IFS=: 00:07:33.661 14:13:39 -- accel/accel.sh@20 -- # read -r var val 00:07:33.661 14:13:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:33.661 14:13:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.661 14:13:39 -- accel/accel.sh@20 -- # IFS=: 00:07:33.661 14:13:39 -- accel/accel.sh@20 -- # read -r var val 00:07:33.661 14:13:39 -- accel/accel.sh@21 -- # val= 00:07:33.661 14:13:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.661 14:13:39 -- accel/accel.sh@20 -- # IFS=: 00:07:33.661 14:13:39 -- accel/accel.sh@20 -- # read -r var val 00:07:33.661 14:13:39 -- accel/accel.sh@21 -- # val=software 00:07:33.661 14:13:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.661 14:13:39 -- accel/accel.sh@23 -- # accel_module=software 00:07:33.661 14:13:39 -- accel/accel.sh@20 -- # IFS=: 00:07:33.662 14:13:39 -- accel/accel.sh@20 -- # read -r var val 00:07:33.662 14:13:39 -- accel/accel.sh@21 -- # val=32 00:07:33.662 14:13:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.662 14:13:39 -- accel/accel.sh@20 -- # IFS=: 00:07:33.662 14:13:39 -- accel/accel.sh@20 -- # read -r var val 00:07:33.662 14:13:39 -- accel/accel.sh@21 -- # val=32 00:07:33.662 14:13:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.662 14:13:39 -- accel/accel.sh@20 -- # IFS=: 00:07:33.662 14:13:39 -- accel/accel.sh@20 -- # read -r var val 00:07:33.662 14:13:39 -- accel/accel.sh@21 -- # val=1 00:07:33.662 14:13:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.662 14:13:39 -- accel/accel.sh@20 -- # IFS=: 00:07:33.662 14:13:39 -- accel/accel.sh@20 -- # read -r var val 00:07:33.662 14:13:39 -- accel/accel.sh@21 -- # val='1 seconds' 
00:07:33.662 14:13:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.662 14:13:39 -- accel/accel.sh@20 -- # IFS=: 00:07:33.662 14:13:39 -- accel/accel.sh@20 -- # read -r var val 00:07:33.662 14:13:39 -- accel/accel.sh@21 -- # val=Yes 00:07:33.662 14:13:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.662 14:13:39 -- accel/accel.sh@20 -- # IFS=: 00:07:33.662 14:13:39 -- accel/accel.sh@20 -- # read -r var val 00:07:33.662 14:13:39 -- accel/accel.sh@21 -- # val= 00:07:33.662 14:13:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.662 14:13:39 -- accel/accel.sh@20 -- # IFS=: 00:07:33.662 14:13:39 -- accel/accel.sh@20 -- # read -r var val 00:07:33.662 14:13:39 -- accel/accel.sh@21 -- # val= 00:07:33.662 14:13:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.662 14:13:39 -- accel/accel.sh@20 -- # IFS=: 00:07:33.662 14:13:39 -- accel/accel.sh@20 -- # read -r var val 00:07:35.034 14:13:40 -- accel/accel.sh@21 -- # val= 00:07:35.034 14:13:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.034 14:13:40 -- accel/accel.sh@20 -- # IFS=: 00:07:35.034 14:13:40 -- accel/accel.sh@20 -- # read -r var val 00:07:35.034 14:13:40 -- accel/accel.sh@21 -- # val= 00:07:35.034 14:13:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.034 14:13:40 -- accel/accel.sh@20 -- # IFS=: 00:07:35.034 14:13:40 -- accel/accel.sh@20 -- # read -r var val 00:07:35.034 14:13:40 -- accel/accel.sh@21 -- # val= 00:07:35.034 14:13:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.034 14:13:40 -- accel/accel.sh@20 -- # IFS=: 00:07:35.034 14:13:40 -- accel/accel.sh@20 -- # read -r var val 00:07:35.034 14:13:40 -- accel/accel.sh@21 -- # val= 00:07:35.034 14:13:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.034 14:13:40 -- accel/accel.sh@20 -- # IFS=: 00:07:35.034 14:13:40 -- accel/accel.sh@20 -- # read -r var val 00:07:35.034 14:13:40 -- accel/accel.sh@21 -- # val= 00:07:35.034 14:13:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.034 14:13:40 -- accel/accel.sh@20 -- # IFS=: 00:07:35.034 14:13:40 -- accel/accel.sh@20 -- # read -r var val 00:07:35.034 14:13:40 -- accel/accel.sh@21 -- # val= 00:07:35.034 14:13:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.034 14:13:40 -- accel/accel.sh@20 -- # IFS=: 00:07:35.034 14:13:40 -- accel/accel.sh@20 -- # read -r var val 00:07:35.034 14:13:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:35.034 14:13:40 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:35.034 ************************************ 00:07:35.034 END TEST accel_compare 00:07:35.034 ************************************ 00:07:35.034 14:13:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.034 00:07:35.034 real 0m2.816s 00:07:35.034 user 0m2.386s 00:07:35.034 sys 0m0.223s 00:07:35.034 14:13:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:35.034 14:13:40 -- common/autotest_common.sh@10 -- # set +x 00:07:35.034 14:13:40 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:35.034 14:13:40 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:35.034 14:13:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.034 14:13:40 -- common/autotest_common.sh@10 -- # set +x 00:07:35.034 ************************************ 00:07:35.034 START TEST accel_xor 00:07:35.034 ************************************ 00:07:35.034 14:13:40 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:07:35.034 14:13:40 -- accel/accel.sh@16 -- # local accel_opc 00:07:35.034 14:13:40 -- accel/accel.sh@17 -- # local accel_module 00:07:35.034 
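Annotation: the accel_compare test above is a memcmp-style workload in which each 4096-byte transfer compares two buffers and counts miscompares. The sketch below shows those semantics and reproduces the bandwidth arithmetic behind the result table (562944 transfers/s times 4096 bytes is roughly 2199 MiB/s, matching the logged per-core line); the fill values are illustrative and the throughput math is an inference from the printed columns, not taken from accel_perf's source.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        enum { XFER = 4096 };
        static uint8_t a[XFER], b[XFER];

        memset(a, 0x11, sizeof(a));
        memset(b, 0x11, sizeof(b));

        /* compare: report a miscompare if the buffers differ; the "Failed"
         * and "Miscompares" columns in the log stay 0 when they match. */
        int miscompare = memcmp(a, b, XFER) != 0;

        /* Bandwidth column sanity check: transfers/s * transfer size.
         * 562944/s * 4096 B = 2305818624 B/s ~= 2199 MiB/s, as reported. */
        double mib_per_s = 562944.0 * XFER / (1024.0 * 1024.0);

        printf("miscompare=%d, computed bandwidth=%.0f MiB/s\n", miscompare, mib_per_s);
        return 0;
    }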
14:13:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:35.034 14:13:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:35.034 14:13:40 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.034 14:13:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:35.034 14:13:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.034 14:13:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.034 14:13:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:35.034 14:13:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:35.034 14:13:40 -- accel/accel.sh@41 -- # local IFS=, 00:07:35.034 14:13:40 -- accel/accel.sh@42 -- # jq -r . 00:07:35.034 [2024-12-05 14:13:40.447166] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:35.034 [2024-12-05 14:13:40.447412] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70878 ] 00:07:35.034 [2024-12-05 14:13:40.582591] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.034 [2024-12-05 14:13:40.641951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.406 14:13:41 -- accel/accel.sh@18 -- # out=' 00:07:36.406 SPDK Configuration: 00:07:36.406 Core mask: 0x1 00:07:36.406 00:07:36.406 Accel Perf Configuration: 00:07:36.406 Workload Type: xor 00:07:36.406 Source buffers: 2 00:07:36.406 Transfer size: 4096 bytes 00:07:36.406 Vector count 1 00:07:36.406 Module: software 00:07:36.406 Queue depth: 32 00:07:36.406 Allocate depth: 32 00:07:36.406 # threads/core: 1 00:07:36.406 Run time: 1 seconds 00:07:36.406 Verify: Yes 00:07:36.406 00:07:36.406 Running for 1 seconds... 00:07:36.406 00:07:36.406 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:36.406 ------------------------------------------------------------------------------------ 00:07:36.406 0,0 293536/s 1146 MiB/s 0 0 00:07:36.406 ==================================================================================== 00:07:36.406 Total 293536/s 1146 MiB/s 0 0' 00:07:36.406 14:13:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:36.406 14:13:41 -- accel/accel.sh@20 -- # IFS=: 00:07:36.406 14:13:41 -- accel/accel.sh@20 -- # read -r var val 00:07:36.406 14:13:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:36.406 14:13:41 -- accel/accel.sh@12 -- # build_accel_config 00:07:36.406 14:13:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:36.406 14:13:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.406 14:13:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.406 14:13:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:36.406 14:13:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:36.406 14:13:41 -- accel/accel.sh@41 -- # local IFS=, 00:07:36.406 14:13:41 -- accel/accel.sh@42 -- # jq -r . 00:07:36.406 [2024-12-05 14:13:41.861957] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:36.406 [2024-12-05 14:13:41.862048] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70897 ] 00:07:36.406 [2024-12-05 14:13:41.989741] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.406 [2024-12-05 14:13:42.039772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.664 14:13:42 -- accel/accel.sh@21 -- # val= 00:07:36.664 14:13:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.664 14:13:42 -- accel/accel.sh@20 -- # IFS=: 00:07:36.664 14:13:42 -- accel/accel.sh@20 -- # read -r var val 00:07:36.664 14:13:42 -- accel/accel.sh@21 -- # val= 00:07:36.664 14:13:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.664 14:13:42 -- accel/accel.sh@20 -- # IFS=: 00:07:36.664 14:13:42 -- accel/accel.sh@20 -- # read -r var val 00:07:36.664 14:13:42 -- accel/accel.sh@21 -- # val=0x1 00:07:36.664 14:13:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.664 14:13:42 -- accel/accel.sh@20 -- # IFS=: 00:07:36.664 14:13:42 -- accel/accel.sh@20 -- # read -r var val 00:07:36.664 14:13:42 -- accel/accel.sh@21 -- # val= 00:07:36.664 14:13:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.664 14:13:42 -- accel/accel.sh@20 -- # IFS=: 00:07:36.665 14:13:42 -- accel/accel.sh@20 -- # read -r var val 00:07:36.665 14:13:42 -- accel/accel.sh@21 -- # val= 00:07:36.665 14:13:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.665 14:13:42 -- accel/accel.sh@20 -- # IFS=: 00:07:36.665 14:13:42 -- accel/accel.sh@20 -- # read -r var val 00:07:36.665 14:13:42 -- accel/accel.sh@21 -- # val=xor 00:07:36.665 14:13:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.665 14:13:42 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:36.665 14:13:42 -- accel/accel.sh@20 -- # IFS=: 00:07:36.665 14:13:42 -- accel/accel.sh@20 -- # read -r var val 00:07:36.665 14:13:42 -- accel/accel.sh@21 -- # val=2 00:07:36.665 14:13:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.665 14:13:42 -- accel/accel.sh@20 -- # IFS=: 00:07:36.665 14:13:42 -- accel/accel.sh@20 -- # read -r var val 00:07:36.665 14:13:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:36.665 14:13:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.665 14:13:42 -- accel/accel.sh@20 -- # IFS=: 00:07:36.665 14:13:42 -- accel/accel.sh@20 -- # read -r var val 00:07:36.665 14:13:42 -- accel/accel.sh@21 -- # val= 00:07:36.665 14:13:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.665 14:13:42 -- accel/accel.sh@20 -- # IFS=: 00:07:36.665 14:13:42 -- accel/accel.sh@20 -- # read -r var val 00:07:36.665 14:13:42 -- accel/accel.sh@21 -- # val=software 00:07:36.665 14:13:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.665 14:13:42 -- accel/accel.sh@23 -- # accel_module=software 00:07:36.665 14:13:42 -- accel/accel.sh@20 -- # IFS=: 00:07:36.665 14:13:42 -- accel/accel.sh@20 -- # read -r var val 00:07:36.665 14:13:42 -- accel/accel.sh@21 -- # val=32 00:07:36.665 14:13:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.665 14:13:42 -- accel/accel.sh@20 -- # IFS=: 00:07:36.665 14:13:42 -- accel/accel.sh@20 -- # read -r var val 00:07:36.665 14:13:42 -- accel/accel.sh@21 -- # val=32 00:07:36.665 14:13:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.665 14:13:42 -- accel/accel.sh@20 -- # IFS=: 00:07:36.665 14:13:42 -- accel/accel.sh@20 -- # read -r var val 00:07:36.665 14:13:42 -- accel/accel.sh@21 -- # val=1 00:07:36.665 14:13:42 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:36.665 14:13:42 -- accel/accel.sh@20 -- # IFS=: 00:07:36.665 14:13:42 -- accel/accel.sh@20 -- # read -r var val 00:07:36.665 14:13:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:36.665 14:13:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.665 14:13:42 -- accel/accel.sh@20 -- # IFS=: 00:07:36.665 14:13:42 -- accel/accel.sh@20 -- # read -r var val 00:07:36.665 14:13:42 -- accel/accel.sh@21 -- # val=Yes 00:07:36.665 14:13:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.665 14:13:42 -- accel/accel.sh@20 -- # IFS=: 00:07:36.665 14:13:42 -- accel/accel.sh@20 -- # read -r var val 00:07:36.665 14:13:42 -- accel/accel.sh@21 -- # val= 00:07:36.665 14:13:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.665 14:13:42 -- accel/accel.sh@20 -- # IFS=: 00:07:36.665 14:13:42 -- accel/accel.sh@20 -- # read -r var val 00:07:36.665 14:13:42 -- accel/accel.sh@21 -- # val= 00:07:36.665 14:13:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.665 14:13:42 -- accel/accel.sh@20 -- # IFS=: 00:07:36.665 14:13:42 -- accel/accel.sh@20 -- # read -r var val 00:07:37.601 14:13:43 -- accel/accel.sh@21 -- # val= 00:07:37.601 14:13:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.601 14:13:43 -- accel/accel.sh@20 -- # IFS=: 00:07:37.601 14:13:43 -- accel/accel.sh@20 -- # read -r var val 00:07:37.601 14:13:43 -- accel/accel.sh@21 -- # val= 00:07:37.601 14:13:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.601 14:13:43 -- accel/accel.sh@20 -- # IFS=: 00:07:37.601 14:13:43 -- accel/accel.sh@20 -- # read -r var val 00:07:37.601 14:13:43 -- accel/accel.sh@21 -- # val= 00:07:37.601 14:13:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.601 14:13:43 -- accel/accel.sh@20 -- # IFS=: 00:07:37.601 14:13:43 -- accel/accel.sh@20 -- # read -r var val 00:07:37.601 14:13:43 -- accel/accel.sh@21 -- # val= 00:07:37.601 14:13:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.601 14:13:43 -- accel/accel.sh@20 -- # IFS=: 00:07:37.601 14:13:43 -- accel/accel.sh@20 -- # read -r var val 00:07:37.601 14:13:43 -- accel/accel.sh@21 -- # val= 00:07:37.601 14:13:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.601 14:13:43 -- accel/accel.sh@20 -- # IFS=: 00:07:37.601 14:13:43 -- accel/accel.sh@20 -- # read -r var val 00:07:37.601 14:13:43 -- accel/accel.sh@21 -- # val= 00:07:37.601 14:13:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.601 14:13:43 -- accel/accel.sh@20 -- # IFS=: 00:07:37.601 14:13:43 -- accel/accel.sh@20 -- # read -r var val 00:07:37.601 14:13:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:37.601 14:13:43 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:37.601 14:13:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.601 00:07:37.601 real 0m2.816s 00:07:37.601 user 0m2.386s 00:07:37.601 sys 0m0.221s 00:07:37.601 14:13:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:37.601 14:13:43 -- common/autotest_common.sh@10 -- # set +x 00:07:37.601 ************************************ 00:07:37.601 END TEST accel_xor 00:07:37.601 ************************************ 00:07:37.861 14:13:43 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:37.861 14:13:43 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:37.861 14:13:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:37.861 14:13:43 -- common/autotest_common.sh@10 -- # set +x 00:07:37.861 ************************************ 00:07:37.861 START TEST accel_xor 00:07:37.861 ************************************ 00:07:37.861 
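Annotation: the accel_xor test just completed uses two 4096-byte source buffers; the run that starts next raises the source count to three via -x 3, and a generalized sketch follows that run. Below is a minimal C sketch of the two-source case. It is not SPDK's software module; the byte-wise loop is for clarity, and a real implementation would normally operate on wider words.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        enum { XFER = 4096 };                 /* transfer size from the log */
        static uint8_t s0[XFER], s1[XFER], dst[XFER];

        memset(s0, 0x0F, XFER);               /* illustrative fill values */
        memset(s1, 0xF0, XFER);

        /* xor with two source buffers: dst = s0 ^ s1, byte by byte */
        for (size_t i = 0; i < XFER; i++)
            dst[i] = s0[i] ^ s1[i];

        printf("first byte: 0x%02x\n", dst[0]);   /* 0x0F ^ 0xF0 = 0xFF */
        return 0;
    }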
14:13:43 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:07:37.861 14:13:43 -- accel/accel.sh@16 -- # local accel_opc 00:07:37.861 14:13:43 -- accel/accel.sh@17 -- # local accel_module 00:07:37.861 14:13:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:37.861 14:13:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:37.861 14:13:43 -- accel/accel.sh@12 -- # build_accel_config 00:07:37.861 14:13:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:37.861 14:13:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.861 14:13:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.861 14:13:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:37.861 14:13:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:37.861 14:13:43 -- accel/accel.sh@41 -- # local IFS=, 00:07:37.861 14:13:43 -- accel/accel.sh@42 -- # jq -r . 00:07:37.861 [2024-12-05 14:13:43.315132] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:37.861 [2024-12-05 14:13:43.315222] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70932 ] 00:07:37.861 [2024-12-05 14:13:43.453400] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.120 [2024-12-05 14:13:43.511589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.057 14:13:44 -- accel/accel.sh@18 -- # out=' 00:07:39.057 SPDK Configuration: 00:07:39.057 Core mask: 0x1 00:07:39.057 00:07:39.057 Accel Perf Configuration: 00:07:39.057 Workload Type: xor 00:07:39.057 Source buffers: 3 00:07:39.057 Transfer size: 4096 bytes 00:07:39.057 Vector count 1 00:07:39.057 Module: software 00:07:39.057 Queue depth: 32 00:07:39.057 Allocate depth: 32 00:07:39.057 # threads/core: 1 00:07:39.057 Run time: 1 seconds 00:07:39.057 Verify: Yes 00:07:39.057 00:07:39.057 Running for 1 seconds... 00:07:39.057 00:07:39.057 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:39.057 ------------------------------------------------------------------------------------ 00:07:39.057 0,0 278400/s 1087 MiB/s 0 0 00:07:39.057 ==================================================================================== 00:07:39.057 Total 278400/s 1087 MiB/s 0 0' 00:07:39.057 14:13:44 -- accel/accel.sh@20 -- # IFS=: 00:07:39.057 14:13:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:39.057 14:13:44 -- accel/accel.sh@20 -- # read -r var val 00:07:39.057 14:13:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:39.057 14:13:44 -- accel/accel.sh@12 -- # build_accel_config 00:07:39.057 14:13:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:39.057 14:13:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.057 14:13:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.057 14:13:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:39.057 14:13:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:39.057 14:13:44 -- accel/accel.sh@41 -- # local IFS=, 00:07:39.057 14:13:44 -- accel/accel.sh@42 -- # jq -r . 00:07:39.316 [2024-12-05 14:13:44.724030] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:39.316 [2024-12-05 14:13:44.724292] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70946 ] 00:07:39.316 [2024-12-05 14:13:44.858703] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.316 [2024-12-05 14:13:44.916871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.575 14:13:44 -- accel/accel.sh@21 -- # val= 00:07:39.575 14:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.575 14:13:44 -- accel/accel.sh@20 -- # IFS=: 00:07:39.575 14:13:44 -- accel/accel.sh@20 -- # read -r var val 00:07:39.575 14:13:44 -- accel/accel.sh@21 -- # val= 00:07:39.575 14:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.575 14:13:44 -- accel/accel.sh@20 -- # IFS=: 00:07:39.575 14:13:44 -- accel/accel.sh@20 -- # read -r var val 00:07:39.575 14:13:44 -- accel/accel.sh@21 -- # val=0x1 00:07:39.575 14:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.575 14:13:44 -- accel/accel.sh@20 -- # IFS=: 00:07:39.575 14:13:44 -- accel/accel.sh@20 -- # read -r var val 00:07:39.575 14:13:44 -- accel/accel.sh@21 -- # val= 00:07:39.575 14:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.575 14:13:44 -- accel/accel.sh@20 -- # IFS=: 00:07:39.575 14:13:44 -- accel/accel.sh@20 -- # read -r var val 00:07:39.575 14:13:44 -- accel/accel.sh@21 -- # val= 00:07:39.575 14:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.575 14:13:44 -- accel/accel.sh@20 -- # IFS=: 00:07:39.575 14:13:44 -- accel/accel.sh@20 -- # read -r var val 00:07:39.575 14:13:44 -- accel/accel.sh@21 -- # val=xor 00:07:39.575 14:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.575 14:13:44 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:39.575 14:13:44 -- accel/accel.sh@20 -- # IFS=: 00:07:39.575 14:13:44 -- accel/accel.sh@20 -- # read -r var val 00:07:39.575 14:13:44 -- accel/accel.sh@21 -- # val=3 00:07:39.575 14:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.575 14:13:44 -- accel/accel.sh@20 -- # IFS=: 00:07:39.575 14:13:44 -- accel/accel.sh@20 -- # read -r var val 00:07:39.575 14:13:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:39.575 14:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.575 14:13:44 -- accel/accel.sh@20 -- # IFS=: 00:07:39.575 14:13:44 -- accel/accel.sh@20 -- # read -r var val 00:07:39.575 14:13:44 -- accel/accel.sh@21 -- # val= 00:07:39.575 14:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.575 14:13:44 -- accel/accel.sh@20 -- # IFS=: 00:07:39.575 14:13:44 -- accel/accel.sh@20 -- # read -r var val 00:07:39.575 14:13:44 -- accel/accel.sh@21 -- # val=software 00:07:39.575 14:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.575 14:13:44 -- accel/accel.sh@23 -- # accel_module=software 00:07:39.575 14:13:44 -- accel/accel.sh@20 -- # IFS=: 00:07:39.575 14:13:44 -- accel/accel.sh@20 -- # read -r var val 00:07:39.575 14:13:44 -- accel/accel.sh@21 -- # val=32 00:07:39.575 14:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.575 14:13:44 -- accel/accel.sh@20 -- # IFS=: 00:07:39.575 14:13:44 -- accel/accel.sh@20 -- # read -r var val 00:07:39.575 14:13:44 -- accel/accel.sh@21 -- # val=32 00:07:39.575 14:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.575 14:13:44 -- accel/accel.sh@20 -- # IFS=: 00:07:39.575 14:13:44 -- accel/accel.sh@20 -- # read -r var val 00:07:39.575 14:13:44 -- accel/accel.sh@21 -- # val=1 00:07:39.575 14:13:44 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:39.575 14:13:44 -- accel/accel.sh@20 -- # IFS=: 00:07:39.575 14:13:44 -- accel/accel.sh@20 -- # read -r var val 00:07:39.575 14:13:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:39.575 14:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.575 14:13:44 -- accel/accel.sh@20 -- # IFS=: 00:07:39.575 14:13:44 -- accel/accel.sh@20 -- # read -r var val 00:07:39.575 14:13:44 -- accel/accel.sh@21 -- # val=Yes 00:07:39.575 14:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.575 14:13:44 -- accel/accel.sh@20 -- # IFS=: 00:07:39.575 14:13:44 -- accel/accel.sh@20 -- # read -r var val 00:07:39.575 14:13:44 -- accel/accel.sh@21 -- # val= 00:07:39.575 14:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.575 14:13:44 -- accel/accel.sh@20 -- # IFS=: 00:07:39.575 14:13:44 -- accel/accel.sh@20 -- # read -r var val 00:07:39.575 14:13:44 -- accel/accel.sh@21 -- # val= 00:07:39.575 14:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.575 14:13:44 -- accel/accel.sh@20 -- # IFS=: 00:07:39.575 14:13:44 -- accel/accel.sh@20 -- # read -r var val 00:07:40.511 14:13:46 -- accel/accel.sh@21 -- # val= 00:07:40.511 14:13:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.511 14:13:46 -- accel/accel.sh@20 -- # IFS=: 00:07:40.511 14:13:46 -- accel/accel.sh@20 -- # read -r var val 00:07:40.511 14:13:46 -- accel/accel.sh@21 -- # val= 00:07:40.511 14:13:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.511 14:13:46 -- accel/accel.sh@20 -- # IFS=: 00:07:40.511 14:13:46 -- accel/accel.sh@20 -- # read -r var val 00:07:40.511 14:13:46 -- accel/accel.sh@21 -- # val= 00:07:40.511 14:13:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.511 14:13:46 -- accel/accel.sh@20 -- # IFS=: 00:07:40.511 14:13:46 -- accel/accel.sh@20 -- # read -r var val 00:07:40.511 14:13:46 -- accel/accel.sh@21 -- # val= 00:07:40.511 14:13:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.511 14:13:46 -- accel/accel.sh@20 -- # IFS=: 00:07:40.511 14:13:46 -- accel/accel.sh@20 -- # read -r var val 00:07:40.511 14:13:46 -- accel/accel.sh@21 -- # val= 00:07:40.511 14:13:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.511 14:13:46 -- accel/accel.sh@20 -- # IFS=: 00:07:40.511 14:13:46 -- accel/accel.sh@20 -- # read -r var val 00:07:40.511 14:13:46 -- accel/accel.sh@21 -- # val= 00:07:40.511 14:13:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.511 14:13:46 -- accel/accel.sh@20 -- # IFS=: 00:07:40.511 14:13:46 -- accel/accel.sh@20 -- # read -r var val 00:07:40.511 14:13:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:40.511 14:13:46 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:40.511 14:13:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.511 00:07:40.511 real 0m2.835s 00:07:40.511 user 0m2.408s 00:07:40.511 sys 0m0.225s 00:07:40.511 14:13:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:40.511 14:13:46 -- common/autotest_common.sh@10 -- # set +x 00:07:40.511 ************************************ 00:07:40.511 END TEST accel_xor 00:07:40.511 ************************************ 00:07:40.770 14:13:46 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:40.770 14:13:46 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:40.770 14:13:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:40.770 14:13:46 -- common/autotest_common.sh@10 -- # set +x 00:07:40.770 ************************************ 00:07:40.770 START TEST accel_dif_verify 00:07:40.770 ************************************ 
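Annotation: the second accel_xor run, driven with -x 3, is the same operation over three source buffers. A generalized sketch with a variable source count is below; three sources match the logged "Source buffers: 3", and the fill values are again only illustrative.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* XOR n source buffers of len bytes into dst (n >= 1). */
    static void xor_n(uint8_t *dst, uint8_t *srcs[], int n, size_t len)
    {
        memcpy(dst, srcs[0], len);
        for (int s = 1; s < n; s++)
            for (size_t i = 0; i < len; i++)
                dst[i] ^= srcs[s][i];
    }

    int main(void)
    {
        enum { XFER = 4096 };
        static uint8_t s0[XFER], s1[XFER], s2[XFER], dst[XFER];
        uint8_t *srcs[] = { s0, s1, s2 };      /* "Source buffers: 3", as logged */

        memset(s0, 0x0F, XFER);
        memset(s1, 0xF0, XFER);
        memset(s2, 0xFF, XFER);

        xor_n(dst, srcs, 3, XFER);
        printf("first byte: 0x%02x\n", dst[0]); /* (0x0F ^ 0xF0) ^ 0xFF = 0x00 */
        return 0;
    }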
00:07:40.770 14:13:46 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:07:40.770 14:13:46 -- accel/accel.sh@16 -- # local accel_opc 00:07:40.770 14:13:46 -- accel/accel.sh@17 -- # local accel_module 00:07:40.770 14:13:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:40.770 14:13:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:40.770 14:13:46 -- accel/accel.sh@12 -- # build_accel_config 00:07:40.770 14:13:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:40.770 14:13:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.770 14:13:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.770 14:13:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:40.770 14:13:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:40.770 14:13:46 -- accel/accel.sh@41 -- # local IFS=, 00:07:40.770 14:13:46 -- accel/accel.sh@42 -- # jq -r . 00:07:40.770 [2024-12-05 14:13:46.199947] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:40.770 [2024-12-05 14:13:46.200218] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70986 ] 00:07:40.770 [2024-12-05 14:13:46.323120] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.770 [2024-12-05 14:13:46.379440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.147 14:13:47 -- accel/accel.sh@18 -- # out=' 00:07:42.147 SPDK Configuration: 00:07:42.147 Core mask: 0x1 00:07:42.147 00:07:42.147 Accel Perf Configuration: 00:07:42.147 Workload Type: dif_verify 00:07:42.147 Vector size: 4096 bytes 00:07:42.147 Transfer size: 4096 bytes 00:07:42.147 Block size: 512 bytes 00:07:42.147 Metadata size: 8 bytes 00:07:42.147 Vector count 1 00:07:42.147 Module: software 00:07:42.147 Queue depth: 32 00:07:42.147 Allocate depth: 32 00:07:42.147 # threads/core: 1 00:07:42.147 Run time: 1 seconds 00:07:42.147 Verify: No 00:07:42.147 00:07:42.147 Running for 1 seconds... 00:07:42.147 00:07:42.147 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:42.147 ------------------------------------------------------------------------------------ 00:07:42.147 0,0 125984/s 499 MiB/s 0 0 00:07:42.147 ==================================================================================== 00:07:42.147 Total 125984/s 492 MiB/s 0 0' 00:07:42.147 14:13:47 -- accel/accel.sh@20 -- # IFS=: 00:07:42.147 14:13:47 -- accel/accel.sh@20 -- # read -r var val 00:07:42.147 14:13:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:42.147 14:13:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:42.147 14:13:47 -- accel/accel.sh@12 -- # build_accel_config 00:07:42.147 14:13:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:42.147 14:13:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.148 14:13:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.148 14:13:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:42.148 14:13:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:42.148 14:13:47 -- accel/accel.sh@41 -- # local IFS=, 00:07:42.148 14:13:47 -- accel/accel.sh@42 -- # jq -r . 00:07:42.148 [2024-12-05 14:13:47.591579] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:42.148 [2024-12-05 14:13:47.591664] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71000 ] 00:07:42.148 [2024-12-05 14:13:47.731016] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.407 [2024-12-05 14:13:47.794796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.407 14:13:47 -- accel/accel.sh@21 -- # val= 00:07:42.407 14:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.407 14:13:47 -- accel/accel.sh@20 -- # IFS=: 00:07:42.407 14:13:47 -- accel/accel.sh@20 -- # read -r var val 00:07:42.407 14:13:47 -- accel/accel.sh@21 -- # val= 00:07:42.407 14:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.407 14:13:47 -- accel/accel.sh@20 -- # IFS=: 00:07:42.407 14:13:47 -- accel/accel.sh@20 -- # read -r var val 00:07:42.407 14:13:47 -- accel/accel.sh@21 -- # val=0x1 00:07:42.407 14:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.407 14:13:47 -- accel/accel.sh@20 -- # IFS=: 00:07:42.407 14:13:47 -- accel/accel.sh@20 -- # read -r var val 00:07:42.407 14:13:47 -- accel/accel.sh@21 -- # val= 00:07:42.407 14:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.407 14:13:47 -- accel/accel.sh@20 -- # IFS=: 00:07:42.407 14:13:47 -- accel/accel.sh@20 -- # read -r var val 00:07:42.407 14:13:47 -- accel/accel.sh@21 -- # val= 00:07:42.407 14:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.407 14:13:47 -- accel/accel.sh@20 -- # IFS=: 00:07:42.407 14:13:47 -- accel/accel.sh@20 -- # read -r var val 00:07:42.407 14:13:47 -- accel/accel.sh@21 -- # val=dif_verify 00:07:42.407 14:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.407 14:13:47 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:42.407 14:13:47 -- accel/accel.sh@20 -- # IFS=: 00:07:42.407 14:13:47 -- accel/accel.sh@20 -- # read -r var val 00:07:42.407 14:13:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:42.407 14:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.407 14:13:47 -- accel/accel.sh@20 -- # IFS=: 00:07:42.407 14:13:47 -- accel/accel.sh@20 -- # read -r var val 00:07:42.407 14:13:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:42.407 14:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.407 14:13:47 -- accel/accel.sh@20 -- # IFS=: 00:07:42.407 14:13:47 -- accel/accel.sh@20 -- # read -r var val 00:07:42.407 14:13:47 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:42.407 14:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.407 14:13:47 -- accel/accel.sh@20 -- # IFS=: 00:07:42.407 14:13:47 -- accel/accel.sh@20 -- # read -r var val 00:07:42.407 14:13:47 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:42.407 14:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.407 14:13:47 -- accel/accel.sh@20 -- # IFS=: 00:07:42.407 14:13:47 -- accel/accel.sh@20 -- # read -r var val 00:07:42.407 14:13:47 -- accel/accel.sh@21 -- # val= 00:07:42.407 14:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.407 14:13:47 -- accel/accel.sh@20 -- # IFS=: 00:07:42.407 14:13:47 -- accel/accel.sh@20 -- # read -r var val 00:07:42.407 14:13:47 -- accel/accel.sh@21 -- # val=software 00:07:42.407 14:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.407 14:13:47 -- accel/accel.sh@23 -- # accel_module=software 00:07:42.407 14:13:47 -- accel/accel.sh@20 -- # IFS=: 00:07:42.408 14:13:47 -- accel/accel.sh@20 -- # read -r var val 00:07:42.408 14:13:47 -- accel/accel.sh@21 
-- # val=32 00:07:42.408 14:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.408 14:13:47 -- accel/accel.sh@20 -- # IFS=: 00:07:42.408 14:13:47 -- accel/accel.sh@20 -- # read -r var val 00:07:42.408 14:13:47 -- accel/accel.sh@21 -- # val=32 00:07:42.408 14:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.408 14:13:47 -- accel/accel.sh@20 -- # IFS=: 00:07:42.408 14:13:47 -- accel/accel.sh@20 -- # read -r var val 00:07:42.408 14:13:47 -- accel/accel.sh@21 -- # val=1 00:07:42.408 14:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.408 14:13:47 -- accel/accel.sh@20 -- # IFS=: 00:07:42.408 14:13:47 -- accel/accel.sh@20 -- # read -r var val 00:07:42.408 14:13:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:42.408 14:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.408 14:13:47 -- accel/accel.sh@20 -- # IFS=: 00:07:42.408 14:13:47 -- accel/accel.sh@20 -- # read -r var val 00:07:42.408 14:13:47 -- accel/accel.sh@21 -- # val=No 00:07:42.408 14:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.408 14:13:47 -- accel/accel.sh@20 -- # IFS=: 00:07:42.408 14:13:47 -- accel/accel.sh@20 -- # read -r var val 00:07:42.408 14:13:47 -- accel/accel.sh@21 -- # val= 00:07:42.408 14:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.408 14:13:47 -- accel/accel.sh@20 -- # IFS=: 00:07:42.408 14:13:47 -- accel/accel.sh@20 -- # read -r var val 00:07:42.408 14:13:47 -- accel/accel.sh@21 -- # val= 00:07:42.408 14:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.408 14:13:47 -- accel/accel.sh@20 -- # IFS=: 00:07:42.408 14:13:47 -- accel/accel.sh@20 -- # read -r var val 00:07:43.344 14:13:48 -- accel/accel.sh@21 -- # val= 00:07:43.344 14:13:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.344 14:13:48 -- accel/accel.sh@20 -- # IFS=: 00:07:43.344 14:13:48 -- accel/accel.sh@20 -- # read -r var val 00:07:43.344 14:13:48 -- accel/accel.sh@21 -- # val= 00:07:43.602 14:13:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.602 14:13:48 -- accel/accel.sh@20 -- # IFS=: 00:07:43.602 14:13:48 -- accel/accel.sh@20 -- # read -r var val 00:07:43.602 14:13:48 -- accel/accel.sh@21 -- # val= 00:07:43.602 14:13:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.602 14:13:48 -- accel/accel.sh@20 -- # IFS=: 00:07:43.602 14:13:48 -- accel/accel.sh@20 -- # read -r var val 00:07:43.602 14:13:48 -- accel/accel.sh@21 -- # val= 00:07:43.602 14:13:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.602 14:13:48 -- accel/accel.sh@20 -- # IFS=: 00:07:43.602 14:13:48 -- accel/accel.sh@20 -- # read -r var val 00:07:43.602 14:13:48 -- accel/accel.sh@21 -- # val= 00:07:43.602 14:13:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.602 14:13:48 -- accel/accel.sh@20 -- # IFS=: 00:07:43.602 14:13:48 -- accel/accel.sh@20 -- # read -r var val 00:07:43.602 14:13:48 -- accel/accel.sh@21 -- # val= 00:07:43.602 14:13:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.602 14:13:48 -- accel/accel.sh@20 -- # IFS=: 00:07:43.602 14:13:48 -- accel/accel.sh@20 -- # read -r var val 00:07:43.602 14:13:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:43.602 14:13:48 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:43.602 14:13:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.602 00:07:43.602 real 0m2.817s 00:07:43.602 user 0m2.389s 00:07:43.602 sys 0m0.229s 00:07:43.602 14:13:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:43.602 ************************************ 00:07:43.602 END TEST accel_dif_verify 00:07:43.602 ************************************ 00:07:43.602 
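Annotation: the accel_dif_verify test above, and the accel_dif_generate test that starts next, exercise T10 DIF protection information. The logged configuration (4096-byte transfer, 512-byte block size, 8-byte metadata) means eight data blocks, each carrying an 8-byte PI field whose guard tag is a CRC-16 over the block. The C sketch below shows generate-then-verify for that layout; the polynomial 0x8BB7 and the guard/app/ref split follow the standard T10 DIF definitions, but the seed handling, tag values and placement of the PI are simplifications for illustration, not SPDK's implementation.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    enum { BLOCK = 512, BLOCKS = 8 };   /* 4096-byte transfer, 512-byte blocks */

    /* CRC-16 used for the T10 DIF guard tag: polynomial 0x8BB7, no reflection,
     * no final XOR; the initial value is assumed to be 0 here. */
    static uint16_t crc16_t10dif(uint16_t crc, const uint8_t *buf, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            crc ^= (uint16_t)((uint16_t)buf[i] << 8);
            for (int b = 0; b < 8; b++)
                crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                     : (uint16_t)(crc << 1);
        }
        return crc;
    }

    struct dif {                 /* 8-byte protection information field */
        uint16_t guard;          /* CRC-16 of the data block            */
        uint16_t app_tag;
        uint32_t ref_tag;
    };

    int main(void)
    {
        static uint8_t data[BLOCKS][BLOCK];
        struct dif pi[BLOCKS];

        memset(data, 0x3C, sizeof(data));   /* illustrative fill value */

        /* dif_generate: compute PI for every block */
        for (int i = 0; i < BLOCKS; i++) {
            pi[i].guard = crc16_t10dif(0, data[i], BLOCK);
            pi[i].app_tag = 0;
            pi[i].ref_tag = (uint32_t)i;    /* e.g. starting LBA + block index */
        }

        /* dif_verify: recompute and compare; a mismatch counts as an error */
        int errors = 0;
        for (int i = 0; i < BLOCKS; i++)
            if (crc16_t10dif(0, data[i], BLOCK) != pi[i].guard)
                errors++;

        printf("guard errors: %d\n", errors);
        return 0;
    }

The per-core bandwidth reported for dif_verify (499 MiB/s) is consistent with counting data plus metadata (8 blocks of 520 bytes), while the Total line (492 MiB/s) is consistent with the 4096-byte data size alone; that is an inference from the printed numbers, not a statement about accel_perf's accounting.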
14:13:48 -- common/autotest_common.sh@10 -- # set +x 00:07:43.602 14:13:49 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:43.602 14:13:49 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:43.602 14:13:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:43.602 14:13:49 -- common/autotest_common.sh@10 -- # set +x 00:07:43.602 ************************************ 00:07:43.602 START TEST accel_dif_generate 00:07:43.602 ************************************ 00:07:43.602 14:13:49 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:07:43.602 14:13:49 -- accel/accel.sh@16 -- # local accel_opc 00:07:43.602 14:13:49 -- accel/accel.sh@17 -- # local accel_module 00:07:43.602 14:13:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:43.602 14:13:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:43.602 14:13:49 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.602 14:13:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:43.602 14:13:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.602 14:13:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.602 14:13:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:43.602 14:13:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:43.602 14:13:49 -- accel/accel.sh@41 -- # local IFS=, 00:07:43.602 14:13:49 -- accel/accel.sh@42 -- # jq -r . 00:07:43.602 [2024-12-05 14:13:49.065722] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:43.602 [2024-12-05 14:13:49.066001] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71034 ] 00:07:43.602 [2024-12-05 14:13:49.199226] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.861 [2024-12-05 14:13:49.258017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.796 14:13:50 -- accel/accel.sh@18 -- # out=' 00:07:44.796 SPDK Configuration: 00:07:44.796 Core mask: 0x1 00:07:44.796 00:07:44.796 Accel Perf Configuration: 00:07:44.796 Workload Type: dif_generate 00:07:44.796 Vector size: 4096 bytes 00:07:44.796 Transfer size: 4096 bytes 00:07:44.796 Block size: 512 bytes 00:07:44.796 Metadata size: 8 bytes 00:07:44.796 Vector count 1 00:07:44.796 Module: software 00:07:44.796 Queue depth: 32 00:07:44.796 Allocate depth: 32 00:07:44.796 # threads/core: 1 00:07:44.796 Run time: 1 seconds 00:07:44.796 Verify: No 00:07:44.796 00:07:44.796 Running for 1 seconds... 
00:07:44.796 00:07:44.796 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:44.796 ------------------------------------------------------------------------------------ 00:07:44.796 0,0 151840/s 602 MiB/s 0 0 00:07:44.796 ==================================================================================== 00:07:44.796 Total 151840/s 593 MiB/s 0 0' 00:07:45.055 14:13:50 -- accel/accel.sh@20 -- # IFS=: 00:07:45.055 14:13:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:45.055 14:13:50 -- accel/accel.sh@20 -- # read -r var val 00:07:45.055 14:13:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:45.055 14:13:50 -- accel/accel.sh@12 -- # build_accel_config 00:07:45.055 14:13:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:45.055 14:13:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.055 14:13:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.055 14:13:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:45.055 14:13:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:45.055 14:13:50 -- accel/accel.sh@41 -- # local IFS=, 00:07:45.055 14:13:50 -- accel/accel.sh@42 -- # jq -r . 00:07:45.055 [2024-12-05 14:13:50.467388] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:45.055 [2024-12-05 14:13:50.467488] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71054 ] 00:07:45.055 [2024-12-05 14:13:50.603635] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.055 [2024-12-05 14:13:50.659361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.314 14:13:50 -- accel/accel.sh@21 -- # val= 00:07:45.314 14:13:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.314 14:13:50 -- accel/accel.sh@20 -- # IFS=: 00:07:45.314 14:13:50 -- accel/accel.sh@20 -- # read -r var val 00:07:45.314 14:13:50 -- accel/accel.sh@21 -- # val= 00:07:45.314 14:13:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.314 14:13:50 -- accel/accel.sh@20 -- # IFS=: 00:07:45.314 14:13:50 -- accel/accel.sh@20 -- # read -r var val 00:07:45.314 14:13:50 -- accel/accel.sh@21 -- # val=0x1 00:07:45.314 14:13:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.314 14:13:50 -- accel/accel.sh@20 -- # IFS=: 00:07:45.314 14:13:50 -- accel/accel.sh@20 -- # read -r var val 00:07:45.314 14:13:50 -- accel/accel.sh@21 -- # val= 00:07:45.314 14:13:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.314 14:13:50 -- accel/accel.sh@20 -- # IFS=: 00:07:45.314 14:13:50 -- accel/accel.sh@20 -- # read -r var val 00:07:45.314 14:13:50 -- accel/accel.sh@21 -- # val= 00:07:45.314 14:13:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.314 14:13:50 -- accel/accel.sh@20 -- # IFS=: 00:07:45.314 14:13:50 -- accel/accel.sh@20 -- # read -r var val 00:07:45.314 14:13:50 -- accel/accel.sh@21 -- # val=dif_generate 00:07:45.314 14:13:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.314 14:13:50 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:45.314 14:13:50 -- accel/accel.sh@20 -- # IFS=: 00:07:45.314 14:13:50 -- accel/accel.sh@20 -- # read -r var val 00:07:45.314 14:13:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:45.314 14:13:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.314 14:13:50 -- accel/accel.sh@20 -- # IFS=: 00:07:45.314 14:13:50 -- accel/accel.sh@20 -- # read -r var val 
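For reference, every case in this log follows the same driver pattern; based only on the command lines visible in the trace above, the dif_generate pass boils down to:

  run_test accel_dif_generate accel_test -t 1 -w dif_generate
  # which accel.sh expands to (the -c /dev/fd/62 argument is presumably the JSON accel
  # config assembled by build_accel_config and handed over a file descriptor):
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate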
00:07:45.314 14:13:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:45.314 14:13:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.314 14:13:50 -- accel/accel.sh@20 -- # IFS=: 00:07:45.314 14:13:50 -- accel/accel.sh@20 -- # read -r var val 00:07:45.314 14:13:50 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:45.314 14:13:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.314 14:13:50 -- accel/accel.sh@20 -- # IFS=: 00:07:45.314 14:13:50 -- accel/accel.sh@20 -- # read -r var val 00:07:45.314 14:13:50 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:45.314 14:13:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.314 14:13:50 -- accel/accel.sh@20 -- # IFS=: 00:07:45.314 14:13:50 -- accel/accel.sh@20 -- # read -r var val 00:07:45.314 14:13:50 -- accel/accel.sh@21 -- # val= 00:07:45.314 14:13:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.314 14:13:50 -- accel/accel.sh@20 -- # IFS=: 00:07:45.314 14:13:50 -- accel/accel.sh@20 -- # read -r var val 00:07:45.314 14:13:50 -- accel/accel.sh@21 -- # val=software 00:07:45.314 14:13:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.314 14:13:50 -- accel/accel.sh@23 -- # accel_module=software 00:07:45.314 14:13:50 -- accel/accel.sh@20 -- # IFS=: 00:07:45.314 14:13:50 -- accel/accel.sh@20 -- # read -r var val 00:07:45.314 14:13:50 -- accel/accel.sh@21 -- # val=32 00:07:45.314 14:13:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.314 14:13:50 -- accel/accel.sh@20 -- # IFS=: 00:07:45.314 14:13:50 -- accel/accel.sh@20 -- # read -r var val 00:07:45.314 14:13:50 -- accel/accel.sh@21 -- # val=32 00:07:45.314 14:13:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.314 14:13:50 -- accel/accel.sh@20 -- # IFS=: 00:07:45.314 14:13:50 -- accel/accel.sh@20 -- # read -r var val 00:07:45.314 14:13:50 -- accel/accel.sh@21 -- # val=1 00:07:45.314 14:13:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.314 14:13:50 -- accel/accel.sh@20 -- # IFS=: 00:07:45.314 14:13:50 -- accel/accel.sh@20 -- # read -r var val 00:07:45.314 14:13:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:45.314 14:13:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.314 14:13:50 -- accel/accel.sh@20 -- # IFS=: 00:07:45.314 14:13:50 -- accel/accel.sh@20 -- # read -r var val 00:07:45.314 14:13:50 -- accel/accel.sh@21 -- # val=No 00:07:45.314 14:13:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.314 14:13:50 -- accel/accel.sh@20 -- # IFS=: 00:07:45.314 14:13:50 -- accel/accel.sh@20 -- # read -r var val 00:07:45.314 14:13:50 -- accel/accel.sh@21 -- # val= 00:07:45.314 14:13:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.314 14:13:50 -- accel/accel.sh@20 -- # IFS=: 00:07:45.314 14:13:50 -- accel/accel.sh@20 -- # read -r var val 00:07:45.314 14:13:50 -- accel/accel.sh@21 -- # val= 00:07:45.314 14:13:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.314 14:13:50 -- accel/accel.sh@20 -- # IFS=: 00:07:45.314 14:13:50 -- accel/accel.sh@20 -- # read -r var val 00:07:46.249 14:13:51 -- accel/accel.sh@21 -- # val= 00:07:46.249 14:13:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.249 14:13:51 -- accel/accel.sh@20 -- # IFS=: 00:07:46.249 14:13:51 -- accel/accel.sh@20 -- # read -r var val 00:07:46.249 14:13:51 -- accel/accel.sh@21 -- # val= 00:07:46.249 14:13:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.249 14:13:51 -- accel/accel.sh@20 -- # IFS=: 00:07:46.249 14:13:51 -- accel/accel.sh@20 -- # read -r var val 00:07:46.249 14:13:51 -- accel/accel.sh@21 -- # val= 00:07:46.249 14:13:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.249 14:13:51 -- 
accel/accel.sh@20 -- # IFS=: 00:07:46.249 14:13:51 -- accel/accel.sh@20 -- # read -r var val 00:07:46.249 14:13:51 -- accel/accel.sh@21 -- # val= 00:07:46.249 14:13:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.249 14:13:51 -- accel/accel.sh@20 -- # IFS=: 00:07:46.249 14:13:51 -- accel/accel.sh@20 -- # read -r var val 00:07:46.249 14:13:51 -- accel/accel.sh@21 -- # val= 00:07:46.249 14:13:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.249 14:13:51 -- accel/accel.sh@20 -- # IFS=: 00:07:46.249 14:13:51 -- accel/accel.sh@20 -- # read -r var val 00:07:46.249 14:13:51 -- accel/accel.sh@21 -- # val= 00:07:46.249 14:13:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.249 14:13:51 -- accel/accel.sh@20 -- # IFS=: 00:07:46.249 14:13:51 -- accel/accel.sh@20 -- # read -r var val 00:07:46.249 14:13:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:46.249 14:13:51 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:46.249 14:13:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:46.249 00:07:46.249 real 0m2.811s 00:07:46.249 user 0m2.368s 00:07:46.249 sys 0m0.235s 00:07:46.249 14:13:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:46.249 14:13:51 -- common/autotest_common.sh@10 -- # set +x 00:07:46.249 ************************************ 00:07:46.249 END TEST accel_dif_generate 00:07:46.249 ************************************ 00:07:46.508 14:13:51 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:46.508 14:13:51 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:46.508 14:13:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:46.508 14:13:51 -- common/autotest_common.sh@10 -- # set +x 00:07:46.508 ************************************ 00:07:46.508 START TEST accel_dif_generate_copy 00:07:46.508 ************************************ 00:07:46.508 14:13:51 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:07:46.508 14:13:51 -- accel/accel.sh@16 -- # local accel_opc 00:07:46.508 14:13:51 -- accel/accel.sh@17 -- # local accel_module 00:07:46.508 14:13:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:46.508 14:13:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:46.508 14:13:51 -- accel/accel.sh@12 -- # build_accel_config 00:07:46.508 14:13:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:46.508 14:13:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.508 14:13:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.508 14:13:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:46.508 14:13:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:46.508 14:13:51 -- accel/accel.sh@41 -- # local IFS=, 00:07:46.508 14:13:51 -- accel/accel.sh@42 -- # jq -r . 00:07:46.508 [2024-12-05 14:13:51.932190] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:46.508 [2024-12-05 14:13:51.932289] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71083 ] 00:07:46.508 [2024-12-05 14:13:52.059657] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.508 [2024-12-05 14:13:52.114787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.884 14:13:53 -- accel/accel.sh@18 -- # out=' 00:07:47.884 SPDK Configuration: 00:07:47.884 Core mask: 0x1 00:07:47.884 00:07:47.884 Accel Perf Configuration: 00:07:47.884 Workload Type: dif_generate_copy 00:07:47.884 Vector size: 4096 bytes 00:07:47.884 Transfer size: 4096 bytes 00:07:47.884 Vector count 1 00:07:47.884 Module: software 00:07:47.884 Queue depth: 32 00:07:47.884 Allocate depth: 32 00:07:47.884 # threads/core: 1 00:07:47.884 Run time: 1 seconds 00:07:47.884 Verify: No 00:07:47.884 00:07:47.884 Running for 1 seconds... 00:07:47.884 00:07:47.884 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:47.884 ------------------------------------------------------------------------------------ 00:07:47.884 0,0 116320/s 461 MiB/s 0 0 00:07:47.884 ==================================================================================== 00:07:47.884 Total 116320/s 454 MiB/s 0 0' 00:07:47.884 14:13:53 -- accel/accel.sh@20 -- # IFS=: 00:07:47.884 14:13:53 -- accel/accel.sh@20 -- # read -r var val 00:07:47.884 14:13:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:47.884 14:13:53 -- accel/accel.sh@12 -- # build_accel_config 00:07:47.884 14:13:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:47.884 14:13:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:47.884 14:13:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.884 14:13:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.884 14:13:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:47.884 14:13:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:47.884 14:13:53 -- accel/accel.sh@41 -- # local IFS=, 00:07:47.884 14:13:53 -- accel/accel.sh@42 -- # jq -r . 00:07:47.884 [2024-12-05 14:13:53.333987] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
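The dif_generate_copy pass differs from dif_generate only in the workload name passed via -w; a hand-run sketch using the flags visible in the trace:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy

Note that its configuration summary omits the Block size and Metadata size lines that the plain dif_generate workload reported.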
00:07:47.884 [2024-12-05 14:13:53.334079] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71108 ] 00:07:47.884 [2024-12-05 14:13:53.470382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.884 [2024-12-05 14:13:53.523096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.145 14:13:53 -- accel/accel.sh@21 -- # val= 00:07:48.145 14:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.145 14:13:53 -- accel/accel.sh@20 -- # IFS=: 00:07:48.145 14:13:53 -- accel/accel.sh@20 -- # read -r var val 00:07:48.145 14:13:53 -- accel/accel.sh@21 -- # val= 00:07:48.145 14:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.145 14:13:53 -- accel/accel.sh@20 -- # IFS=: 00:07:48.145 14:13:53 -- accel/accel.sh@20 -- # read -r var val 00:07:48.145 14:13:53 -- accel/accel.sh@21 -- # val=0x1 00:07:48.145 14:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.145 14:13:53 -- accel/accel.sh@20 -- # IFS=: 00:07:48.145 14:13:53 -- accel/accel.sh@20 -- # read -r var val 00:07:48.145 14:13:53 -- accel/accel.sh@21 -- # val= 00:07:48.145 14:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.145 14:13:53 -- accel/accel.sh@20 -- # IFS=: 00:07:48.145 14:13:53 -- accel/accel.sh@20 -- # read -r var val 00:07:48.145 14:13:53 -- accel/accel.sh@21 -- # val= 00:07:48.145 14:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.145 14:13:53 -- accel/accel.sh@20 -- # IFS=: 00:07:48.145 14:13:53 -- accel/accel.sh@20 -- # read -r var val 00:07:48.145 14:13:53 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:48.145 14:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.145 14:13:53 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:48.145 14:13:53 -- accel/accel.sh@20 -- # IFS=: 00:07:48.145 14:13:53 -- accel/accel.sh@20 -- # read -r var val 00:07:48.145 14:13:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:48.145 14:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.145 14:13:53 -- accel/accel.sh@20 -- # IFS=: 00:07:48.145 14:13:53 -- accel/accel.sh@20 -- # read -r var val 00:07:48.145 14:13:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:48.145 14:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.145 14:13:53 -- accel/accel.sh@20 -- # IFS=: 00:07:48.145 14:13:53 -- accel/accel.sh@20 -- # read -r var val 00:07:48.145 14:13:53 -- accel/accel.sh@21 -- # val= 00:07:48.145 14:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.145 14:13:53 -- accel/accel.sh@20 -- # IFS=: 00:07:48.145 14:13:53 -- accel/accel.sh@20 -- # read -r var val 00:07:48.145 14:13:53 -- accel/accel.sh@21 -- # val=software 00:07:48.145 14:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.145 14:13:53 -- accel/accel.sh@23 -- # accel_module=software 00:07:48.145 14:13:53 -- accel/accel.sh@20 -- # IFS=: 00:07:48.145 14:13:53 -- accel/accel.sh@20 -- # read -r var val 00:07:48.145 14:13:53 -- accel/accel.sh@21 -- # val=32 00:07:48.145 14:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.145 14:13:53 -- accel/accel.sh@20 -- # IFS=: 00:07:48.145 14:13:53 -- accel/accel.sh@20 -- # read -r var val 00:07:48.145 14:13:53 -- accel/accel.sh@21 -- # val=32 00:07:48.145 14:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.145 14:13:53 -- accel/accel.sh@20 -- # IFS=: 00:07:48.145 14:13:53 -- accel/accel.sh@20 -- # read -r var val 00:07:48.145 14:13:53 -- accel/accel.sh@21 
-- # val=1 00:07:48.145 14:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.145 14:13:53 -- accel/accel.sh@20 -- # IFS=: 00:07:48.145 14:13:53 -- accel/accel.sh@20 -- # read -r var val 00:07:48.145 14:13:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:48.145 14:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.145 14:13:53 -- accel/accel.sh@20 -- # IFS=: 00:07:48.145 14:13:53 -- accel/accel.sh@20 -- # read -r var val 00:07:48.145 14:13:53 -- accel/accel.sh@21 -- # val=No 00:07:48.145 14:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.145 14:13:53 -- accel/accel.sh@20 -- # IFS=: 00:07:48.145 14:13:53 -- accel/accel.sh@20 -- # read -r var val 00:07:48.145 14:13:53 -- accel/accel.sh@21 -- # val= 00:07:48.145 14:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.145 14:13:53 -- accel/accel.sh@20 -- # IFS=: 00:07:48.145 14:13:53 -- accel/accel.sh@20 -- # read -r var val 00:07:48.145 14:13:53 -- accel/accel.sh@21 -- # val= 00:07:48.145 14:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.145 14:13:53 -- accel/accel.sh@20 -- # IFS=: 00:07:48.145 14:13:53 -- accel/accel.sh@20 -- # read -r var val 00:07:49.080 14:13:54 -- accel/accel.sh@21 -- # val= 00:07:49.080 14:13:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.080 14:13:54 -- accel/accel.sh@20 -- # IFS=: 00:07:49.080 14:13:54 -- accel/accel.sh@20 -- # read -r var val 00:07:49.080 14:13:54 -- accel/accel.sh@21 -- # val= 00:07:49.081 14:13:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.081 14:13:54 -- accel/accel.sh@20 -- # IFS=: 00:07:49.081 14:13:54 -- accel/accel.sh@20 -- # read -r var val 00:07:49.081 14:13:54 -- accel/accel.sh@21 -- # val= 00:07:49.081 14:13:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.081 14:13:54 -- accel/accel.sh@20 -- # IFS=: 00:07:49.081 14:13:54 -- accel/accel.sh@20 -- # read -r var val 00:07:49.081 14:13:54 -- accel/accel.sh@21 -- # val= 00:07:49.081 14:13:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.081 14:13:54 -- accel/accel.sh@20 -- # IFS=: 00:07:49.081 14:13:54 -- accel/accel.sh@20 -- # read -r var val 00:07:49.081 14:13:54 -- accel/accel.sh@21 -- # val= 00:07:49.081 14:13:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.081 14:13:54 -- accel/accel.sh@20 -- # IFS=: 00:07:49.081 14:13:54 -- accel/accel.sh@20 -- # read -r var val 00:07:49.081 14:13:54 -- accel/accel.sh@21 -- # val= 00:07:49.081 14:13:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.081 14:13:54 -- accel/accel.sh@20 -- # IFS=: 00:07:49.081 14:13:54 -- accel/accel.sh@20 -- # read -r var val 00:07:49.081 14:13:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:49.081 14:13:54 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:49.081 14:13:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:49.081 00:07:49.081 real 0m2.803s 00:07:49.081 user 0m2.371s 00:07:49.081 sys 0m0.227s 00:07:49.081 14:13:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:49.081 14:13:54 -- common/autotest_common.sh@10 -- # set +x 00:07:49.081 ************************************ 00:07:49.081 END TEST accel_dif_generate_copy 00:07:49.081 ************************************ 00:07:49.340 14:13:54 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:49.340 14:13:54 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:49.340 14:13:54 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:49.340 14:13:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.340 14:13:54 -- 
common/autotest_common.sh@10 -- # set +x 00:07:49.340 ************************************ 00:07:49.340 START TEST accel_comp 00:07:49.340 ************************************ 00:07:49.340 14:13:54 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:49.340 14:13:54 -- accel/accel.sh@16 -- # local accel_opc 00:07:49.340 14:13:54 -- accel/accel.sh@17 -- # local accel_module 00:07:49.340 14:13:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:49.340 14:13:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:49.340 14:13:54 -- accel/accel.sh@12 -- # build_accel_config 00:07:49.340 14:13:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:49.340 14:13:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.340 14:13:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.340 14:13:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:49.340 14:13:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:49.340 14:13:54 -- accel/accel.sh@41 -- # local IFS=, 00:07:49.340 14:13:54 -- accel/accel.sh@42 -- # jq -r . 00:07:49.340 [2024-12-05 14:13:54.787242] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:49.340 [2024-12-05 14:13:54.787500] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71137 ] 00:07:49.340 [2024-12-05 14:13:54.926042] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.340 [2024-12-05 14:13:54.981958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.713 14:13:56 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:50.713 00:07:50.713 SPDK Configuration: 00:07:50.713 Core mask: 0x1 00:07:50.713 00:07:50.713 Accel Perf Configuration: 00:07:50.713 Workload Type: compress 00:07:50.713 Transfer size: 4096 bytes 00:07:50.713 Vector count 1 00:07:50.713 Module: software 00:07:50.713 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:50.713 Queue depth: 32 00:07:50.713 Allocate depth: 32 00:07:50.713 # threads/core: 1 00:07:50.713 Run time: 1 seconds 00:07:50.713 Verify: No 00:07:50.713 00:07:50.713 Running for 1 seconds... 
00:07:50.713 00:07:50.713 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:50.713 ------------------------------------------------------------------------------------ 00:07:50.713 0,0 59840/s 249 MiB/s 0 0 00:07:50.713 ==================================================================================== 00:07:50.713 Total 59840/s 233 MiB/s 0 0' 00:07:50.713 14:13:56 -- accel/accel.sh@20 -- # IFS=: 00:07:50.713 14:13:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:50.713 14:13:56 -- accel/accel.sh@20 -- # read -r var val 00:07:50.713 14:13:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:50.713 14:13:56 -- accel/accel.sh@12 -- # build_accel_config 00:07:50.714 14:13:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:50.714 14:13:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.714 14:13:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.714 14:13:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:50.714 14:13:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:50.714 14:13:56 -- accel/accel.sh@41 -- # local IFS=, 00:07:50.714 14:13:56 -- accel/accel.sh@42 -- # jq -r . 00:07:50.714 [2024-12-05 14:13:56.194051] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:50.714 [2024-12-05 14:13:56.194135] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71151 ] 00:07:50.714 [2024-12-05 14:13:56.328525] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.972 [2024-12-05 14:13:56.391095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.972 14:13:56 -- accel/accel.sh@21 -- # val= 00:07:50.972 14:13:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.972 14:13:56 -- accel/accel.sh@20 -- # IFS=: 00:07:50.972 14:13:56 -- accel/accel.sh@20 -- # read -r var val 00:07:50.972 14:13:56 -- accel/accel.sh@21 -- # val= 00:07:50.972 14:13:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.972 14:13:56 -- accel/accel.sh@20 -- # IFS=: 00:07:50.972 14:13:56 -- accel/accel.sh@20 -- # read -r var val 00:07:50.972 14:13:56 -- accel/accel.sh@21 -- # val= 00:07:50.972 14:13:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.972 14:13:56 -- accel/accel.sh@20 -- # IFS=: 00:07:50.972 14:13:56 -- accel/accel.sh@20 -- # read -r var val 00:07:50.972 14:13:56 -- accel/accel.sh@21 -- # val=0x1 00:07:50.972 14:13:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.972 14:13:56 -- accel/accel.sh@20 -- # IFS=: 00:07:50.972 14:13:56 -- accel/accel.sh@20 -- # read -r var val 00:07:50.972 14:13:56 -- accel/accel.sh@21 -- # val= 00:07:50.972 14:13:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.972 14:13:56 -- accel/accel.sh@20 -- # IFS=: 00:07:50.972 14:13:56 -- accel/accel.sh@20 -- # read -r var val 00:07:50.972 14:13:56 -- accel/accel.sh@21 -- # val= 00:07:50.972 14:13:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.972 14:13:56 -- accel/accel.sh@20 -- # IFS=: 00:07:50.972 14:13:56 -- accel/accel.sh@20 -- # read -r var val 00:07:50.972 14:13:56 -- accel/accel.sh@21 -- # val=compress 00:07:50.972 14:13:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.972 14:13:56 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:50.972 14:13:56 -- accel/accel.sh@20 -- # IFS=: 
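The compress case adds an input file via -l; a minimal reproduction from the flags shown in the trace above:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 \
      -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
  # the File Name line in the configuration summary echoes the -l argument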
00:07:50.972 14:13:56 -- accel/accel.sh@20 -- # read -r var val 00:07:50.972 14:13:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:50.972 14:13:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.972 14:13:56 -- accel/accel.sh@20 -- # IFS=: 00:07:50.972 14:13:56 -- accel/accel.sh@20 -- # read -r var val 00:07:50.972 14:13:56 -- accel/accel.sh@21 -- # val= 00:07:50.972 14:13:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.972 14:13:56 -- accel/accel.sh@20 -- # IFS=: 00:07:50.972 14:13:56 -- accel/accel.sh@20 -- # read -r var val 00:07:50.972 14:13:56 -- accel/accel.sh@21 -- # val=software 00:07:50.972 14:13:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.972 14:13:56 -- accel/accel.sh@23 -- # accel_module=software 00:07:50.972 14:13:56 -- accel/accel.sh@20 -- # IFS=: 00:07:50.972 14:13:56 -- accel/accel.sh@20 -- # read -r var val 00:07:50.972 14:13:56 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:50.972 14:13:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.972 14:13:56 -- accel/accel.sh@20 -- # IFS=: 00:07:50.972 14:13:56 -- accel/accel.sh@20 -- # read -r var val 00:07:50.972 14:13:56 -- accel/accel.sh@21 -- # val=32 00:07:50.972 14:13:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.972 14:13:56 -- accel/accel.sh@20 -- # IFS=: 00:07:50.972 14:13:56 -- accel/accel.sh@20 -- # read -r var val 00:07:50.972 14:13:56 -- accel/accel.sh@21 -- # val=32 00:07:50.972 14:13:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.972 14:13:56 -- accel/accel.sh@20 -- # IFS=: 00:07:50.972 14:13:56 -- accel/accel.sh@20 -- # read -r var val 00:07:50.972 14:13:56 -- accel/accel.sh@21 -- # val=1 00:07:50.972 14:13:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.972 14:13:56 -- accel/accel.sh@20 -- # IFS=: 00:07:50.972 14:13:56 -- accel/accel.sh@20 -- # read -r var val 00:07:50.972 14:13:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:50.972 14:13:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.972 14:13:56 -- accel/accel.sh@20 -- # IFS=: 00:07:50.972 14:13:56 -- accel/accel.sh@20 -- # read -r var val 00:07:50.972 14:13:56 -- accel/accel.sh@21 -- # val=No 00:07:50.972 14:13:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.972 14:13:56 -- accel/accel.sh@20 -- # IFS=: 00:07:50.972 14:13:56 -- accel/accel.sh@20 -- # read -r var val 00:07:50.972 14:13:56 -- accel/accel.sh@21 -- # val= 00:07:50.972 14:13:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.972 14:13:56 -- accel/accel.sh@20 -- # IFS=: 00:07:50.972 14:13:56 -- accel/accel.sh@20 -- # read -r var val 00:07:50.972 14:13:56 -- accel/accel.sh@21 -- # val= 00:07:50.972 14:13:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.972 14:13:56 -- accel/accel.sh@20 -- # IFS=: 00:07:50.972 14:13:56 -- accel/accel.sh@20 -- # read -r var val 00:07:52.345 14:13:57 -- accel/accel.sh@21 -- # val= 00:07:52.345 14:13:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.345 14:13:57 -- accel/accel.sh@20 -- # IFS=: 00:07:52.345 14:13:57 -- accel/accel.sh@20 -- # read -r var val 00:07:52.345 14:13:57 -- accel/accel.sh@21 -- # val= 00:07:52.345 14:13:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.345 14:13:57 -- accel/accel.sh@20 -- # IFS=: 00:07:52.345 14:13:57 -- accel/accel.sh@20 -- # read -r var val 00:07:52.345 14:13:57 -- accel/accel.sh@21 -- # val= 00:07:52.345 14:13:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.345 14:13:57 -- accel/accel.sh@20 -- # IFS=: 00:07:52.345 14:13:57 -- accel/accel.sh@20 -- # read -r var val 00:07:52.345 14:13:57 -- accel/accel.sh@21 -- # val= 
00:07:52.345 14:13:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.345 14:13:57 -- accel/accel.sh@20 -- # IFS=: 00:07:52.345 14:13:57 -- accel/accel.sh@20 -- # read -r var val 00:07:52.345 14:13:57 -- accel/accel.sh@21 -- # val= 00:07:52.345 ************************************ 00:07:52.345 END TEST accel_comp 00:07:52.345 ************************************ 00:07:52.345 14:13:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.345 14:13:57 -- accel/accel.sh@20 -- # IFS=: 00:07:52.345 14:13:57 -- accel/accel.sh@20 -- # read -r var val 00:07:52.345 14:13:57 -- accel/accel.sh@21 -- # val= 00:07:52.345 14:13:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.345 14:13:57 -- accel/accel.sh@20 -- # IFS=: 00:07:52.345 14:13:57 -- accel/accel.sh@20 -- # read -r var val 00:07:52.345 14:13:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:52.345 14:13:57 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:52.345 14:13:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:52.345 00:07:52.345 real 0m2.821s 00:07:52.345 user 0m2.390s 00:07:52.345 sys 0m0.228s 00:07:52.345 14:13:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:52.345 14:13:57 -- common/autotest_common.sh@10 -- # set +x 00:07:52.345 14:13:57 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:52.345 14:13:57 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:52.345 14:13:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:52.345 14:13:57 -- common/autotest_common.sh@10 -- # set +x 00:07:52.345 ************************************ 00:07:52.345 START TEST accel_decomp 00:07:52.345 ************************************ 00:07:52.345 14:13:57 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:52.345 14:13:57 -- accel/accel.sh@16 -- # local accel_opc 00:07:52.345 14:13:57 -- accel/accel.sh@17 -- # local accel_module 00:07:52.345 14:13:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:52.345 14:13:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:52.345 14:13:57 -- accel/accel.sh@12 -- # build_accel_config 00:07:52.345 14:13:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:52.345 14:13:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.345 14:13:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.345 14:13:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:52.345 14:13:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:52.345 14:13:57 -- accel/accel.sh@41 -- # local IFS=, 00:07:52.345 14:13:57 -- accel/accel.sh@42 -- # jq -r . 00:07:52.345 [2024-12-05 14:13:57.659744] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:52.345 [2024-12-05 14:13:57.659866] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71191 ] 00:07:52.345 [2024-12-05 14:13:57.795987] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.345 [2024-12-05 14:13:57.852128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.717 14:13:59 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:53.717 00:07:53.717 SPDK Configuration: 00:07:53.717 Core mask: 0x1 00:07:53.717 00:07:53.717 Accel Perf Configuration: 00:07:53.717 Workload Type: decompress 00:07:53.717 Transfer size: 4096 bytes 00:07:53.717 Vector count 1 00:07:53.717 Module: software 00:07:53.717 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:53.717 Queue depth: 32 00:07:53.717 Allocate depth: 32 00:07:53.717 # threads/core: 1 00:07:53.717 Run time: 1 seconds 00:07:53.717 Verify: Yes 00:07:53.717 00:07:53.717 Running for 1 seconds... 00:07:53.717 00:07:53.717 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:53.717 ------------------------------------------------------------------------------------ 00:07:53.717 0,0 85504/s 157 MiB/s 0 0 00:07:53.717 ==================================================================================== 00:07:53.717 Total 85504/s 334 MiB/s 0 0' 00:07:53.717 14:13:59 -- accel/accel.sh@20 -- # IFS=: 00:07:53.717 14:13:59 -- accel/accel.sh@20 -- # read -r var val 00:07:53.717 14:13:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:53.717 14:13:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:53.717 14:13:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:53.717 14:13:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:53.718 14:13:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.718 14:13:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.718 14:13:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:53.718 14:13:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:53.718 14:13:59 -- accel/accel.sh@41 -- # local IFS=, 00:07:53.718 14:13:59 -- accel/accel.sh@42 -- # jq -r . 00:07:53.718 [2024-12-05 14:13:59.066718] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
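The decompress case reuses the same bib input and adds -y, which matches the summary flipping from 'Verify: No' on the compress run to 'Verify: Yes' here:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 \
      -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y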
00:07:53.718 [2024-12-05 14:13:59.066838] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71205 ] 00:07:53.718 [2024-12-05 14:13:59.195272] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.718 [2024-12-05 14:13:59.249585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.718 14:13:59 -- accel/accel.sh@21 -- # val= 00:07:53.718 14:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.718 14:13:59 -- accel/accel.sh@20 -- # IFS=: 00:07:53.718 14:13:59 -- accel/accel.sh@20 -- # read -r var val 00:07:53.718 14:13:59 -- accel/accel.sh@21 -- # val= 00:07:53.718 14:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.718 14:13:59 -- accel/accel.sh@20 -- # IFS=: 00:07:53.718 14:13:59 -- accel/accel.sh@20 -- # read -r var val 00:07:53.718 14:13:59 -- accel/accel.sh@21 -- # val= 00:07:53.718 14:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.718 14:13:59 -- accel/accel.sh@20 -- # IFS=: 00:07:53.718 14:13:59 -- accel/accel.sh@20 -- # read -r var val 00:07:53.718 14:13:59 -- accel/accel.sh@21 -- # val=0x1 00:07:53.718 14:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.718 14:13:59 -- accel/accel.sh@20 -- # IFS=: 00:07:53.718 14:13:59 -- accel/accel.sh@20 -- # read -r var val 00:07:53.718 14:13:59 -- accel/accel.sh@21 -- # val= 00:07:53.718 14:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.718 14:13:59 -- accel/accel.sh@20 -- # IFS=: 00:07:53.718 14:13:59 -- accel/accel.sh@20 -- # read -r var val 00:07:53.718 14:13:59 -- accel/accel.sh@21 -- # val= 00:07:53.718 14:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.718 14:13:59 -- accel/accel.sh@20 -- # IFS=: 00:07:53.718 14:13:59 -- accel/accel.sh@20 -- # read -r var val 00:07:53.718 14:13:59 -- accel/accel.sh@21 -- # val=decompress 00:07:53.718 14:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.718 14:13:59 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:53.718 14:13:59 -- accel/accel.sh@20 -- # IFS=: 00:07:53.718 14:13:59 -- accel/accel.sh@20 -- # read -r var val 00:07:53.718 14:13:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:53.718 14:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.718 14:13:59 -- accel/accel.sh@20 -- # IFS=: 00:07:53.718 14:13:59 -- accel/accel.sh@20 -- # read -r var val 00:07:53.718 14:13:59 -- accel/accel.sh@21 -- # val= 00:07:53.718 14:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.718 14:13:59 -- accel/accel.sh@20 -- # IFS=: 00:07:53.718 14:13:59 -- accel/accel.sh@20 -- # read -r var val 00:07:53.718 14:13:59 -- accel/accel.sh@21 -- # val=software 00:07:53.718 14:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.718 14:13:59 -- accel/accel.sh@23 -- # accel_module=software 00:07:53.718 14:13:59 -- accel/accel.sh@20 -- # IFS=: 00:07:53.718 14:13:59 -- accel/accel.sh@20 -- # read -r var val 00:07:53.718 14:13:59 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:53.718 14:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.718 14:13:59 -- accel/accel.sh@20 -- # IFS=: 00:07:53.718 14:13:59 -- accel/accel.sh@20 -- # read -r var val 00:07:53.718 14:13:59 -- accel/accel.sh@21 -- # val=32 00:07:53.718 14:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.718 14:13:59 -- accel/accel.sh@20 -- # IFS=: 00:07:53.718 14:13:59 -- accel/accel.sh@20 -- # read -r var val 00:07:53.718 14:13:59 -- 
accel/accel.sh@21 -- # val=32 00:07:53.718 14:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.718 14:13:59 -- accel/accel.sh@20 -- # IFS=: 00:07:53.718 14:13:59 -- accel/accel.sh@20 -- # read -r var val 00:07:53.718 14:13:59 -- accel/accel.sh@21 -- # val=1 00:07:53.718 14:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.718 14:13:59 -- accel/accel.sh@20 -- # IFS=: 00:07:53.718 14:13:59 -- accel/accel.sh@20 -- # read -r var val 00:07:53.718 14:13:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:53.718 14:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.718 14:13:59 -- accel/accel.sh@20 -- # IFS=: 00:07:53.718 14:13:59 -- accel/accel.sh@20 -- # read -r var val 00:07:53.718 14:13:59 -- accel/accel.sh@21 -- # val=Yes 00:07:53.718 14:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.718 14:13:59 -- accel/accel.sh@20 -- # IFS=: 00:07:53.718 14:13:59 -- accel/accel.sh@20 -- # read -r var val 00:07:53.718 14:13:59 -- accel/accel.sh@21 -- # val= 00:07:53.718 14:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.718 14:13:59 -- accel/accel.sh@20 -- # IFS=: 00:07:53.718 14:13:59 -- accel/accel.sh@20 -- # read -r var val 00:07:53.718 14:13:59 -- accel/accel.sh@21 -- # val= 00:07:53.718 14:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.718 14:13:59 -- accel/accel.sh@20 -- # IFS=: 00:07:53.718 14:13:59 -- accel/accel.sh@20 -- # read -r var val 00:07:55.094 14:14:00 -- accel/accel.sh@21 -- # val= 00:07:55.094 14:14:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.094 14:14:00 -- accel/accel.sh@20 -- # IFS=: 00:07:55.094 14:14:00 -- accel/accel.sh@20 -- # read -r var val 00:07:55.094 14:14:00 -- accel/accel.sh@21 -- # val= 00:07:55.094 14:14:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.094 14:14:00 -- accel/accel.sh@20 -- # IFS=: 00:07:55.094 14:14:00 -- accel/accel.sh@20 -- # read -r var val 00:07:55.094 14:14:00 -- accel/accel.sh@21 -- # val= 00:07:55.094 14:14:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.094 14:14:00 -- accel/accel.sh@20 -- # IFS=: 00:07:55.094 14:14:00 -- accel/accel.sh@20 -- # read -r var val 00:07:55.094 14:14:00 -- accel/accel.sh@21 -- # val= 00:07:55.094 14:14:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.094 14:14:00 -- accel/accel.sh@20 -- # IFS=: 00:07:55.094 14:14:00 -- accel/accel.sh@20 -- # read -r var val 00:07:55.094 14:14:00 -- accel/accel.sh@21 -- # val= 00:07:55.094 14:14:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.094 14:14:00 -- accel/accel.sh@20 -- # IFS=: 00:07:55.094 14:14:00 -- accel/accel.sh@20 -- # read -r var val 00:07:55.094 14:14:00 -- accel/accel.sh@21 -- # val= 00:07:55.094 14:14:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.094 14:14:00 -- accel/accel.sh@20 -- # IFS=: 00:07:55.094 14:14:00 -- accel/accel.sh@20 -- # read -r var val 00:07:55.094 14:14:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:55.094 14:14:00 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:55.094 14:14:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:55.094 00:07:55.094 real 0m2.806s 00:07:55.094 user 0m2.378s 00:07:55.094 sys 0m0.223s 00:07:55.094 14:14:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:55.094 14:14:00 -- common/autotest_common.sh@10 -- # set +x 00:07:55.094 ************************************ 00:07:55.094 END TEST accel_decomp 00:07:55.094 ************************************ 00:07:55.094 14:14:00 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:07:55.094 14:14:00 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:55.094 14:14:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:55.094 14:14:00 -- common/autotest_common.sh@10 -- # set +x 00:07:55.094 ************************************ 00:07:55.094 START TEST accel_decmop_full 00:07:55.094 ************************************ 00:07:55.094 14:14:00 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:55.094 14:14:00 -- accel/accel.sh@16 -- # local accel_opc 00:07:55.094 14:14:00 -- accel/accel.sh@17 -- # local accel_module 00:07:55.094 14:14:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:55.094 14:14:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:55.094 14:14:00 -- accel/accel.sh@12 -- # build_accel_config 00:07:55.094 14:14:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:55.094 14:14:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:55.094 14:14:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:55.094 14:14:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:55.094 14:14:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:55.094 14:14:00 -- accel/accel.sh@41 -- # local IFS=, 00:07:55.094 14:14:00 -- accel/accel.sh@42 -- # jq -r . 00:07:55.094 [2024-12-05 14:14:00.519477] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:55.094 [2024-12-05 14:14:00.519723] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71240 ] 00:07:55.094 [2024-12-05 14:14:00.657270] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.094 [2024-12-05 14:14:00.714974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.471 14:14:01 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:56.471 00:07:56.471 SPDK Configuration: 00:07:56.471 Core mask: 0x1 00:07:56.471 00:07:56.471 Accel Perf Configuration: 00:07:56.471 Workload Type: decompress 00:07:56.471 Transfer size: 111250 bytes 00:07:56.471 Vector count 1 00:07:56.471 Module: software 00:07:56.471 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:56.471 Queue depth: 32 00:07:56.471 Allocate depth: 32 00:07:56.471 # threads/core: 1 00:07:56.471 Run time: 1 seconds 00:07:56.471 Verify: Yes 00:07:56.471 00:07:56.471 Running for 1 seconds... 
00:07:56.471 00:07:56.471 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:56.471 ------------------------------------------------------------------------------------ 00:07:56.471 0,0 5696/s 235 MiB/s 0 0 00:07:56.471 ==================================================================================== 00:07:56.471 Total 5696/s 604 MiB/s 0 0' 00:07:56.471 14:14:01 -- accel/accel.sh@20 -- # IFS=: 00:07:56.471 14:14:01 -- accel/accel.sh@20 -- # read -r var val 00:07:56.471 14:14:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:56.471 14:14:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:56.471 14:14:01 -- accel/accel.sh@12 -- # build_accel_config 00:07:56.471 14:14:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:56.471 14:14:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:56.471 14:14:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:56.471 14:14:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:56.471 14:14:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:56.471 14:14:01 -- accel/accel.sh@41 -- # local IFS=, 00:07:56.471 14:14:01 -- accel/accel.sh@42 -- # jq -r . 00:07:56.471 [2024-12-05 14:14:01.937273] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:56.471 [2024-12-05 14:14:01.937374] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71259 ] 00:07:56.471 [2024-12-05 14:14:02.074423] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.730 [2024-12-05 14:14:02.130490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.731 14:14:02 -- accel/accel.sh@21 -- # val= 00:07:56.731 14:14:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.731 14:14:02 -- accel/accel.sh@20 -- # IFS=: 00:07:56.731 14:14:02 -- accel/accel.sh@20 -- # read -r var val 00:07:56.731 14:14:02 -- accel/accel.sh@21 -- # val= 00:07:56.731 14:14:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.731 14:14:02 -- accel/accel.sh@20 -- # IFS=: 00:07:56.731 14:14:02 -- accel/accel.sh@20 -- # read -r var val 00:07:56.731 14:14:02 -- accel/accel.sh@21 -- # val= 00:07:56.731 14:14:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.731 14:14:02 -- accel/accel.sh@20 -- # IFS=: 00:07:56.731 14:14:02 -- accel/accel.sh@20 -- # read -r var val 00:07:56.731 14:14:02 -- accel/accel.sh@21 -- # val=0x1 00:07:56.731 14:14:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.731 14:14:02 -- accel/accel.sh@20 -- # IFS=: 00:07:56.731 14:14:02 -- accel/accel.sh@20 -- # read -r var val 00:07:56.731 14:14:02 -- accel/accel.sh@21 -- # val= 00:07:56.731 14:14:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.731 14:14:02 -- accel/accel.sh@20 -- # IFS=: 00:07:56.731 14:14:02 -- accel/accel.sh@20 -- # read -r var val 00:07:56.731 14:14:02 -- accel/accel.sh@21 -- # val= 00:07:56.731 14:14:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.731 14:14:02 -- accel/accel.sh@20 -- # IFS=: 00:07:56.731 14:14:02 -- accel/accel.sh@20 -- # read -r var val 00:07:56.731 14:14:02 -- accel/accel.sh@21 -- # val=decompress 00:07:56.731 14:14:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.731 14:14:02 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:56.731 14:14:02 -- accel/accel.sh@20 
-- # IFS=: 00:07:56.731 14:14:02 -- accel/accel.sh@20 -- # read -r var val 00:07:56.731 14:14:02 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:56.731 14:14:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.731 14:14:02 -- accel/accel.sh@20 -- # IFS=: 00:07:56.731 14:14:02 -- accel/accel.sh@20 -- # read -r var val 00:07:56.731 14:14:02 -- accel/accel.sh@21 -- # val= 00:07:56.731 14:14:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.731 14:14:02 -- accel/accel.sh@20 -- # IFS=: 00:07:56.731 14:14:02 -- accel/accel.sh@20 -- # read -r var val 00:07:56.731 14:14:02 -- accel/accel.sh@21 -- # val=software 00:07:56.731 14:14:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.731 14:14:02 -- accel/accel.sh@23 -- # accel_module=software 00:07:56.731 14:14:02 -- accel/accel.sh@20 -- # IFS=: 00:07:56.731 14:14:02 -- accel/accel.sh@20 -- # read -r var val 00:07:56.731 14:14:02 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:56.731 14:14:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.731 14:14:02 -- accel/accel.sh@20 -- # IFS=: 00:07:56.731 14:14:02 -- accel/accel.sh@20 -- # read -r var val 00:07:56.731 14:14:02 -- accel/accel.sh@21 -- # val=32 00:07:56.731 14:14:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.731 14:14:02 -- accel/accel.sh@20 -- # IFS=: 00:07:56.731 14:14:02 -- accel/accel.sh@20 -- # read -r var val 00:07:56.731 14:14:02 -- accel/accel.sh@21 -- # val=32 00:07:56.731 14:14:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.731 14:14:02 -- accel/accel.sh@20 -- # IFS=: 00:07:56.731 14:14:02 -- accel/accel.sh@20 -- # read -r var val 00:07:56.731 14:14:02 -- accel/accel.sh@21 -- # val=1 00:07:56.731 14:14:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.731 14:14:02 -- accel/accel.sh@20 -- # IFS=: 00:07:56.731 14:14:02 -- accel/accel.sh@20 -- # read -r var val 00:07:56.731 14:14:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:56.731 14:14:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.731 14:14:02 -- accel/accel.sh@20 -- # IFS=: 00:07:56.731 14:14:02 -- accel/accel.sh@20 -- # read -r var val 00:07:56.731 14:14:02 -- accel/accel.sh@21 -- # val=Yes 00:07:56.731 14:14:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.731 14:14:02 -- accel/accel.sh@20 -- # IFS=: 00:07:56.731 14:14:02 -- accel/accel.sh@20 -- # read -r var val 00:07:56.731 14:14:02 -- accel/accel.sh@21 -- # val= 00:07:56.731 14:14:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.731 14:14:02 -- accel/accel.sh@20 -- # IFS=: 00:07:56.731 14:14:02 -- accel/accel.sh@20 -- # read -r var val 00:07:56.731 14:14:02 -- accel/accel.sh@21 -- # val= 00:07:56.731 14:14:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.731 14:14:02 -- accel/accel.sh@20 -- # IFS=: 00:07:56.731 14:14:02 -- accel/accel.sh@20 -- # read -r var val 00:07:58.109 14:14:03 -- accel/accel.sh@21 -- # val= 00:07:58.109 14:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.109 14:14:03 -- accel/accel.sh@20 -- # IFS=: 00:07:58.109 14:14:03 -- accel/accel.sh@20 -- # read -r var val 00:07:58.109 14:14:03 -- accel/accel.sh@21 -- # val= 00:07:58.109 14:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.109 14:14:03 -- accel/accel.sh@20 -- # IFS=: 00:07:58.109 14:14:03 -- accel/accel.sh@20 -- # read -r var val 00:07:58.109 14:14:03 -- accel/accel.sh@21 -- # val= 00:07:58.109 14:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.109 14:14:03 -- accel/accel.sh@20 -- # IFS=: 00:07:58.109 14:14:03 -- accel/accel.sh@20 -- # read -r var val 00:07:58.109 14:14:03 -- accel/accel.sh@21 -- # 
val= 00:07:58.109 14:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.109 14:14:03 -- accel/accel.sh@20 -- # IFS=: 00:07:58.109 14:14:03 -- accel/accel.sh@20 -- # read -r var val 00:07:58.109 14:14:03 -- accel/accel.sh@21 -- # val= 00:07:58.109 14:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.109 14:14:03 -- accel/accel.sh@20 -- # IFS=: 00:07:58.109 14:14:03 -- accel/accel.sh@20 -- # read -r var val 00:07:58.109 14:14:03 -- accel/accel.sh@21 -- # val= 00:07:58.109 14:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.109 14:14:03 -- accel/accel.sh@20 -- # IFS=: 00:07:58.109 14:14:03 -- accel/accel.sh@20 -- # read -r var val 00:07:58.109 14:14:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:58.109 14:14:03 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:58.109 14:14:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:58.109 00:07:58.109 real 0m2.839s 00:07:58.109 user 0m2.407s 00:07:58.109 sys 0m0.227s 00:07:58.109 14:14:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:58.109 14:14:03 -- common/autotest_common.sh@10 -- # set +x 00:07:58.109 ************************************ 00:07:58.109 END TEST accel_decmop_full 00:07:58.109 ************************************ 00:07:58.109 14:14:03 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:58.109 14:14:03 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:58.109 14:14:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:58.109 14:14:03 -- common/autotest_common.sh@10 -- # set +x 00:07:58.109 ************************************ 00:07:58.109 START TEST accel_decomp_mcore 00:07:58.109 ************************************ 00:07:58.109 14:14:03 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:58.109 14:14:03 -- accel/accel.sh@16 -- # local accel_opc 00:07:58.110 14:14:03 -- accel/accel.sh@17 -- # local accel_module 00:07:58.110 14:14:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:58.110 14:14:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:58.110 14:14:03 -- accel/accel.sh@12 -- # build_accel_config 00:07:58.110 14:14:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:58.110 14:14:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:58.110 14:14:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:58.110 14:14:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:58.110 14:14:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:58.110 14:14:03 -- accel/accel.sh@41 -- # local IFS=, 00:07:58.110 14:14:03 -- accel/accel.sh@42 -- # jq -r . 00:07:58.110 [2024-12-05 14:14:03.411909] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
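accel_decmop_full is the same decompress workload with -o 0 appended; judging from its configuration summary above, that lets accel_perf take the transfer size from the compressed input (111250 bytes here) instead of the default 4096 bytes:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 \
      -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0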
00:07:58.110 [2024-12-05 14:14:03.411992] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71288 ] 00:07:58.110 [2024-12-05 14:14:03.546694] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:58.110 [2024-12-05 14:14:03.618431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.110 [2024-12-05 14:14:03.618582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:58.110 [2024-12-05 14:14:03.618684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.110 [2024-12-05 14:14:03.618683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:59.488 14:14:04 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:59.488 00:07:59.488 SPDK Configuration: 00:07:59.488 Core mask: 0xf 00:07:59.488 00:07:59.488 Accel Perf Configuration: 00:07:59.488 Workload Type: decompress 00:07:59.488 Transfer size: 4096 bytes 00:07:59.488 Vector count 1 00:07:59.488 Module: software 00:07:59.488 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:59.488 Queue depth: 32 00:07:59.488 Allocate depth: 32 00:07:59.488 # threads/core: 1 00:07:59.488 Run time: 1 seconds 00:07:59.488 Verify: Yes 00:07:59.488 00:07:59.488 Running for 1 seconds... 00:07:59.488 00:07:59.488 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:59.488 ------------------------------------------------------------------------------------ 00:07:59.488 0,0 59424/s 109 MiB/s 0 0 00:07:59.488 3,0 51168/s 94 MiB/s 0 0 00:07:59.488 2,0 57056/s 105 MiB/s 0 0 00:07:59.488 1,0 57536/s 106 MiB/s 0 0 00:07:59.488 ==================================================================================== 00:07:59.488 Total 225184/s 879 MiB/s 0 0' 00:07:59.488 14:14:04 -- accel/accel.sh@20 -- # IFS=: 00:07:59.488 14:14:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:59.488 14:14:04 -- accel/accel.sh@20 -- # read -r var val 00:07:59.488 14:14:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:59.488 14:14:04 -- accel/accel.sh@12 -- # build_accel_config 00:07:59.488 14:14:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:59.488 14:14:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:59.488 14:14:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:59.488 14:14:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:59.488 14:14:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:59.488 14:14:04 -- accel/accel.sh@41 -- # local IFS=, 00:07:59.488 14:14:04 -- accel/accel.sh@42 -- # jq -r . 00:07:59.488 [2024-12-05 14:14:04.934023] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
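The mcore variant passes -m 0xf, so four reactors come up on cores 0-3 and the results table gains one row per core:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 \
      -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf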
00:07:59.488 [2024-12-05 14:14:04.934287] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71316 ] 00:07:59.488 [2024-12-05 14:14:05.071226] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:59.746 [2024-12-05 14:14:05.144492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.746 [2024-12-05 14:14:05.144650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:59.746 [2024-12-05 14:14:05.145048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.746 [2024-12-05 14:14:05.144778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:59.746 14:14:05 -- accel/accel.sh@21 -- # val= 00:07:59.746 14:14:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.746 14:14:05 -- accel/accel.sh@20 -- # IFS=: 00:07:59.746 14:14:05 -- accel/accel.sh@20 -- # read -r var val 00:07:59.746 14:14:05 -- accel/accel.sh@21 -- # val= 00:07:59.746 14:14:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.746 14:14:05 -- accel/accel.sh@20 -- # IFS=: 00:07:59.746 14:14:05 -- accel/accel.sh@20 -- # read -r var val 00:07:59.746 14:14:05 -- accel/accel.sh@21 -- # val= 00:07:59.746 14:14:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.746 14:14:05 -- accel/accel.sh@20 -- # IFS=: 00:07:59.746 14:14:05 -- accel/accel.sh@20 -- # read -r var val 00:07:59.746 14:14:05 -- accel/accel.sh@21 -- # val=0xf 00:07:59.746 14:14:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.746 14:14:05 -- accel/accel.sh@20 -- # IFS=: 00:07:59.746 14:14:05 -- accel/accel.sh@20 -- # read -r var val 00:07:59.746 14:14:05 -- accel/accel.sh@21 -- # val= 00:07:59.746 14:14:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.746 14:14:05 -- accel/accel.sh@20 -- # IFS=: 00:07:59.746 14:14:05 -- accel/accel.sh@20 -- # read -r var val 00:07:59.746 14:14:05 -- accel/accel.sh@21 -- # val= 00:07:59.746 14:14:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.746 14:14:05 -- accel/accel.sh@20 -- # IFS=: 00:07:59.746 14:14:05 -- accel/accel.sh@20 -- # read -r var val 00:07:59.746 14:14:05 -- accel/accel.sh@21 -- # val=decompress 00:07:59.746 14:14:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.746 14:14:05 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:59.746 14:14:05 -- accel/accel.sh@20 -- # IFS=: 00:07:59.746 14:14:05 -- accel/accel.sh@20 -- # read -r var val 00:07:59.746 14:14:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:59.746 14:14:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.746 14:14:05 -- accel/accel.sh@20 -- # IFS=: 00:07:59.746 14:14:05 -- accel/accel.sh@20 -- # read -r var val 00:07:59.746 14:14:05 -- accel/accel.sh@21 -- # val= 00:07:59.746 14:14:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.746 14:14:05 -- accel/accel.sh@20 -- # IFS=: 00:07:59.746 14:14:05 -- accel/accel.sh@20 -- # read -r var val 00:07:59.746 14:14:05 -- accel/accel.sh@21 -- # val=software 00:07:59.746 14:14:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.746 14:14:05 -- accel/accel.sh@23 -- # accel_module=software 00:07:59.746 14:14:05 -- accel/accel.sh@20 -- # IFS=: 00:07:59.746 14:14:05 -- accel/accel.sh@20 -- # read -r var val 00:07:59.746 14:14:05 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:59.746 14:14:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.746 14:14:05 -- accel/accel.sh@20 -- # IFS=: 
00:07:59.746 14:14:05 -- accel/accel.sh@20 -- # read -r var val 00:07:59.746 14:14:05 -- accel/accel.sh@21 -- # val=32 00:07:59.746 14:14:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.746 14:14:05 -- accel/accel.sh@20 -- # IFS=: 00:07:59.746 14:14:05 -- accel/accel.sh@20 -- # read -r var val 00:07:59.746 14:14:05 -- accel/accel.sh@21 -- # val=32 00:07:59.746 14:14:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.746 14:14:05 -- accel/accel.sh@20 -- # IFS=: 00:07:59.746 14:14:05 -- accel/accel.sh@20 -- # read -r var val 00:07:59.746 14:14:05 -- accel/accel.sh@21 -- # val=1 00:07:59.746 14:14:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.746 14:14:05 -- accel/accel.sh@20 -- # IFS=: 00:07:59.746 14:14:05 -- accel/accel.sh@20 -- # read -r var val 00:07:59.746 14:14:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:59.746 14:14:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.746 14:14:05 -- accel/accel.sh@20 -- # IFS=: 00:07:59.746 14:14:05 -- accel/accel.sh@20 -- # read -r var val 00:07:59.746 14:14:05 -- accel/accel.sh@21 -- # val=Yes 00:07:59.746 14:14:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.746 14:14:05 -- accel/accel.sh@20 -- # IFS=: 00:07:59.747 14:14:05 -- accel/accel.sh@20 -- # read -r var val 00:07:59.747 14:14:05 -- accel/accel.sh@21 -- # val= 00:07:59.747 14:14:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.747 14:14:05 -- accel/accel.sh@20 -- # IFS=: 00:07:59.747 14:14:05 -- accel/accel.sh@20 -- # read -r var val 00:07:59.747 14:14:05 -- accel/accel.sh@21 -- # val= 00:07:59.747 14:14:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.747 14:14:05 -- accel/accel.sh@20 -- # IFS=: 00:07:59.747 14:14:05 -- accel/accel.sh@20 -- # read -r var val 00:08:01.118 14:14:06 -- accel/accel.sh@21 -- # val= 00:08:01.118 14:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.118 14:14:06 -- accel/accel.sh@20 -- # IFS=: 00:08:01.118 14:14:06 -- accel/accel.sh@20 -- # read -r var val 00:08:01.118 14:14:06 -- accel/accel.sh@21 -- # val= 00:08:01.118 14:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.118 14:14:06 -- accel/accel.sh@20 -- # IFS=: 00:08:01.118 14:14:06 -- accel/accel.sh@20 -- # read -r var val 00:08:01.118 14:14:06 -- accel/accel.sh@21 -- # val= 00:08:01.118 14:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.118 14:14:06 -- accel/accel.sh@20 -- # IFS=: 00:08:01.118 14:14:06 -- accel/accel.sh@20 -- # read -r var val 00:08:01.118 14:14:06 -- accel/accel.sh@21 -- # val= 00:08:01.118 14:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.118 14:14:06 -- accel/accel.sh@20 -- # IFS=: 00:08:01.118 14:14:06 -- accel/accel.sh@20 -- # read -r var val 00:08:01.118 14:14:06 -- accel/accel.sh@21 -- # val= 00:08:01.118 14:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.118 14:14:06 -- accel/accel.sh@20 -- # IFS=: 00:08:01.118 14:14:06 -- accel/accel.sh@20 -- # read -r var val 00:08:01.118 14:14:06 -- accel/accel.sh@21 -- # val= 00:08:01.118 14:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.118 14:14:06 -- accel/accel.sh@20 -- # IFS=: 00:08:01.118 14:14:06 -- accel/accel.sh@20 -- # read -r var val 00:08:01.118 14:14:06 -- accel/accel.sh@21 -- # val= 00:08:01.118 14:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.118 14:14:06 -- accel/accel.sh@20 -- # IFS=: 00:08:01.118 14:14:06 -- accel/accel.sh@20 -- # read -r var val 00:08:01.118 14:14:06 -- accel/accel.sh@21 -- # val= 00:08:01.118 14:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.118 14:14:06 -- accel/accel.sh@20 -- # IFS=: 00:08:01.118 14:14:06 -- 
accel/accel.sh@20 -- # read -r var val 00:08:01.118 14:14:06 -- accel/accel.sh@21 -- # val= 00:08:01.118 14:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.118 14:14:06 -- accel/accel.sh@20 -- # IFS=: 00:08:01.118 14:14:06 -- accel/accel.sh@20 -- # read -r var val 00:08:01.118 14:14:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:01.118 14:14:06 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:01.118 14:14:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:01.118 00:08:01.118 real 0m3.081s 00:08:01.118 user 0m9.781s 00:08:01.118 sys 0m0.318s 00:08:01.118 14:14:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:01.118 14:14:06 -- common/autotest_common.sh@10 -- # set +x 00:08:01.118 ************************************ 00:08:01.118 END TEST accel_decomp_mcore 00:08:01.118 ************************************ 00:08:01.118 14:14:06 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:01.118 14:14:06 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:01.118 14:14:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:01.118 14:14:06 -- common/autotest_common.sh@10 -- # set +x 00:08:01.118 ************************************ 00:08:01.118 START TEST accel_decomp_full_mcore 00:08:01.118 ************************************ 00:08:01.118 14:14:06 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:01.118 14:14:06 -- accel/accel.sh@16 -- # local accel_opc 00:08:01.118 14:14:06 -- accel/accel.sh@17 -- # local accel_module 00:08:01.118 14:14:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:01.118 14:14:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:01.118 14:14:06 -- accel/accel.sh@12 -- # build_accel_config 00:08:01.118 14:14:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:01.118 14:14:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:01.118 14:14:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:01.118 14:14:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:01.118 14:14:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:01.118 14:14:06 -- accel/accel.sh@41 -- # local IFS=, 00:08:01.118 14:14:06 -- accel/accel.sh@42 -- # jq -r . 00:08:01.118 [2024-12-05 14:14:06.548902] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:01.118 [2024-12-05 14:14:06.549009] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71348 ] 00:08:01.118 [2024-12-05 14:14:06.677062] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:01.118 [2024-12-05 14:14:06.748710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.118 [2024-12-05 14:14:06.748850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.118 [2024-12-05 14:14:06.748958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:01.118 [2024-12-05 14:14:06.749255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.491 14:14:08 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:08:02.491 00:08:02.491 SPDK Configuration: 00:08:02.491 Core mask: 0xf 00:08:02.491 00:08:02.491 Accel Perf Configuration: 00:08:02.491 Workload Type: decompress 00:08:02.491 Transfer size: 111250 bytes 00:08:02.491 Vector count 1 00:08:02.491 Module: software 00:08:02.491 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:02.491 Queue depth: 32 00:08:02.491 Allocate depth: 32 00:08:02.491 # threads/core: 1 00:08:02.491 Run time: 1 seconds 00:08:02.491 Verify: Yes 00:08:02.491 00:08:02.491 Running for 1 seconds... 00:08:02.491 00:08:02.491 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:02.491 ------------------------------------------------------------------------------------ 00:08:02.491 0,0 5472/s 226 MiB/s 0 0 00:08:02.491 3,0 5440/s 224 MiB/s 0 0 00:08:02.491 2,0 5536/s 228 MiB/s 0 0 00:08:02.491 1,0 5568/s 230 MiB/s 0 0 00:08:02.491 ==================================================================================== 00:08:02.491 Total 22016/s 2335 MiB/s 0 0' 00:08:02.491 14:14:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:02.491 14:14:08 -- accel/accel.sh@20 -- # IFS=: 00:08:02.491 14:14:08 -- accel/accel.sh@20 -- # read -r var val 00:08:02.492 14:14:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:02.492 14:14:08 -- accel/accel.sh@12 -- # build_accel_config 00:08:02.492 14:14:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:02.492 14:14:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:02.492 14:14:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:02.492 14:14:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:02.492 14:14:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:02.492 14:14:08 -- accel/accel.sh@41 -- # local IFS=, 00:08:02.492 14:14:08 -- accel/accel.sh@42 -- # jq -r . 00:08:02.492 [2024-12-05 14:14:08.073354] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
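accel_decomp_full_mcore repeats the multi-core decompress run with -o 0 appended; as the configuration dump above shows, the transfer size switches from the default 4096 bytes to the full 111250-byte buffer, so per-core descriptor rates drop while aggregate bandwidth reaches about 2335 MiB/s. A hand-run sketch, under the same assumptions as the earlier one (built tree, bib input, default software module, no fd-62 JSON config):

  # full-buffer decompress across cores 0-3
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf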
00:08:02.492 [2024-12-05 14:14:08.073429] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71376 ] 00:08:02.750 [2024-12-05 14:14:08.202169] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:02.750 [2024-12-05 14:14:08.269623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.750 [2024-12-05 14:14:08.269763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:02.750 [2024-12-05 14:14:08.269871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:02.750 [2024-12-05 14:14:08.270236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.750 14:14:08 -- accel/accel.sh@21 -- # val= 00:08:02.750 14:14:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.750 14:14:08 -- accel/accel.sh@20 -- # IFS=: 00:08:02.750 14:14:08 -- accel/accel.sh@20 -- # read -r var val 00:08:02.750 14:14:08 -- accel/accel.sh@21 -- # val= 00:08:02.750 14:14:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.750 14:14:08 -- accel/accel.sh@20 -- # IFS=: 00:08:02.750 14:14:08 -- accel/accel.sh@20 -- # read -r var val 00:08:02.750 14:14:08 -- accel/accel.sh@21 -- # val= 00:08:02.750 14:14:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.750 14:14:08 -- accel/accel.sh@20 -- # IFS=: 00:08:02.750 14:14:08 -- accel/accel.sh@20 -- # read -r var val 00:08:02.750 14:14:08 -- accel/accel.sh@21 -- # val=0xf 00:08:02.750 14:14:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.750 14:14:08 -- accel/accel.sh@20 -- # IFS=: 00:08:02.750 14:14:08 -- accel/accel.sh@20 -- # read -r var val 00:08:02.750 14:14:08 -- accel/accel.sh@21 -- # val= 00:08:02.750 14:14:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.750 14:14:08 -- accel/accel.sh@20 -- # IFS=: 00:08:02.750 14:14:08 -- accel/accel.sh@20 -- # read -r var val 00:08:02.750 14:14:08 -- accel/accel.sh@21 -- # val= 00:08:02.750 14:14:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.750 14:14:08 -- accel/accel.sh@20 -- # IFS=: 00:08:02.750 14:14:08 -- accel/accel.sh@20 -- # read -r var val 00:08:02.750 14:14:08 -- accel/accel.sh@21 -- # val=decompress 00:08:02.750 14:14:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.750 14:14:08 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:02.750 14:14:08 -- accel/accel.sh@20 -- # IFS=: 00:08:02.750 14:14:08 -- accel/accel.sh@20 -- # read -r var val 00:08:02.750 14:14:08 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:02.750 14:14:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.750 14:14:08 -- accel/accel.sh@20 -- # IFS=: 00:08:02.750 14:14:08 -- accel/accel.sh@20 -- # read -r var val 00:08:02.750 14:14:08 -- accel/accel.sh@21 -- # val= 00:08:02.750 14:14:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.750 14:14:08 -- accel/accel.sh@20 -- # IFS=: 00:08:02.750 14:14:08 -- accel/accel.sh@20 -- # read -r var val 00:08:02.750 14:14:08 -- accel/accel.sh@21 -- # val=software 00:08:02.750 14:14:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.750 14:14:08 -- accel/accel.sh@23 -- # accel_module=software 00:08:02.750 14:14:08 -- accel/accel.sh@20 -- # IFS=: 00:08:02.750 14:14:08 -- accel/accel.sh@20 -- # read -r var val 00:08:02.750 14:14:08 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:02.750 14:14:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.750 14:14:08 -- accel/accel.sh@20 -- # IFS=: 
00:08:02.750 14:14:08 -- accel/accel.sh@20 -- # read -r var val 00:08:02.750 14:14:08 -- accel/accel.sh@21 -- # val=32 00:08:02.750 14:14:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.750 14:14:08 -- accel/accel.sh@20 -- # IFS=: 00:08:02.750 14:14:08 -- accel/accel.sh@20 -- # read -r var val 00:08:02.750 14:14:08 -- accel/accel.sh@21 -- # val=32 00:08:02.750 14:14:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.750 14:14:08 -- accel/accel.sh@20 -- # IFS=: 00:08:02.750 14:14:08 -- accel/accel.sh@20 -- # read -r var val 00:08:02.750 14:14:08 -- accel/accel.sh@21 -- # val=1 00:08:02.750 14:14:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.750 14:14:08 -- accel/accel.sh@20 -- # IFS=: 00:08:02.750 14:14:08 -- accel/accel.sh@20 -- # read -r var val 00:08:02.750 14:14:08 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:02.750 14:14:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.750 14:14:08 -- accel/accel.sh@20 -- # IFS=: 00:08:02.750 14:14:08 -- accel/accel.sh@20 -- # read -r var val 00:08:02.750 14:14:08 -- accel/accel.sh@21 -- # val=Yes 00:08:02.750 14:14:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.750 14:14:08 -- accel/accel.sh@20 -- # IFS=: 00:08:02.750 14:14:08 -- accel/accel.sh@20 -- # read -r var val 00:08:02.750 14:14:08 -- accel/accel.sh@21 -- # val= 00:08:02.750 14:14:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.750 14:14:08 -- accel/accel.sh@20 -- # IFS=: 00:08:02.750 14:14:08 -- accel/accel.sh@20 -- # read -r var val 00:08:02.750 14:14:08 -- accel/accel.sh@21 -- # val= 00:08:02.750 14:14:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.750 14:14:08 -- accel/accel.sh@20 -- # IFS=: 00:08:02.750 14:14:08 -- accel/accel.sh@20 -- # read -r var val 00:08:04.127 14:14:09 -- accel/accel.sh@21 -- # val= 00:08:04.127 14:14:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.127 14:14:09 -- accel/accel.sh@20 -- # IFS=: 00:08:04.127 14:14:09 -- accel/accel.sh@20 -- # read -r var val 00:08:04.127 14:14:09 -- accel/accel.sh@21 -- # val= 00:08:04.127 14:14:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.127 14:14:09 -- accel/accel.sh@20 -- # IFS=: 00:08:04.127 14:14:09 -- accel/accel.sh@20 -- # read -r var val 00:08:04.127 14:14:09 -- accel/accel.sh@21 -- # val= 00:08:04.127 14:14:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.127 14:14:09 -- accel/accel.sh@20 -- # IFS=: 00:08:04.127 14:14:09 -- accel/accel.sh@20 -- # read -r var val 00:08:04.127 14:14:09 -- accel/accel.sh@21 -- # val= 00:08:04.127 14:14:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.127 14:14:09 -- accel/accel.sh@20 -- # IFS=: 00:08:04.127 14:14:09 -- accel/accel.sh@20 -- # read -r var val 00:08:04.127 14:14:09 -- accel/accel.sh@21 -- # val= 00:08:04.127 14:14:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.127 14:14:09 -- accel/accel.sh@20 -- # IFS=: 00:08:04.127 14:14:09 -- accel/accel.sh@20 -- # read -r var val 00:08:04.127 14:14:09 -- accel/accel.sh@21 -- # val= 00:08:04.127 14:14:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.127 14:14:09 -- accel/accel.sh@20 -- # IFS=: 00:08:04.127 14:14:09 -- accel/accel.sh@20 -- # read -r var val 00:08:04.127 14:14:09 -- accel/accel.sh@21 -- # val= 00:08:04.127 14:14:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.127 14:14:09 -- accel/accel.sh@20 -- # IFS=: 00:08:04.127 14:14:09 -- accel/accel.sh@20 -- # read -r var val 00:08:04.127 14:14:09 -- accel/accel.sh@21 -- # val= 00:08:04.127 14:14:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.127 14:14:09 -- accel/accel.sh@20 -- # IFS=: 00:08:04.127 14:14:09 -- 
accel/accel.sh@20 -- # read -r var val 00:08:04.127 14:14:09 -- accel/accel.sh@21 -- # val= 00:08:04.127 ************************************ 00:08:04.127 END TEST accel_decomp_full_mcore 00:08:04.127 ************************************ 00:08:04.127 14:14:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.127 14:14:09 -- accel/accel.sh@20 -- # IFS=: 00:08:04.127 14:14:09 -- accel/accel.sh@20 -- # read -r var val 00:08:04.127 14:14:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:04.127 14:14:09 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:04.127 14:14:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:04.127 00:08:04.127 real 0m3.063s 00:08:04.127 user 0m9.874s 00:08:04.127 sys 0m0.329s 00:08:04.127 14:14:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:04.127 14:14:09 -- common/autotest_common.sh@10 -- # set +x 00:08:04.127 14:14:09 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:04.127 14:14:09 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:08:04.127 14:14:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:04.127 14:14:09 -- common/autotest_common.sh@10 -- # set +x 00:08:04.127 ************************************ 00:08:04.127 START TEST accel_decomp_mthread 00:08:04.127 ************************************ 00:08:04.127 14:14:09 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:04.127 14:14:09 -- accel/accel.sh@16 -- # local accel_opc 00:08:04.127 14:14:09 -- accel/accel.sh@17 -- # local accel_module 00:08:04.127 14:14:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:04.127 14:14:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:04.127 14:14:09 -- accel/accel.sh@12 -- # build_accel_config 00:08:04.127 14:14:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:04.127 14:14:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:04.127 14:14:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:04.127 14:14:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:04.127 14:14:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:04.127 14:14:09 -- accel/accel.sh@41 -- # local IFS=, 00:08:04.127 14:14:09 -- accel/accel.sh@42 -- # jq -r . 00:08:04.127 [2024-12-05 14:14:09.664452] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:04.127 [2024-12-05 14:14:09.664513] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71414 ] 00:08:04.387 [2024-12-05 14:14:09.799908] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.387 [2024-12-05 14:14:09.878597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.807 14:14:11 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:08:05.807 00:08:05.807 SPDK Configuration: 00:08:05.807 Core mask: 0x1 00:08:05.807 00:08:05.807 Accel Perf Configuration: 00:08:05.807 Workload Type: decompress 00:08:05.807 Transfer size: 4096 bytes 00:08:05.807 Vector count 1 00:08:05.807 Module: software 00:08:05.807 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:05.807 Queue depth: 32 00:08:05.807 Allocate depth: 32 00:08:05.807 # threads/core: 2 00:08:05.807 Run time: 1 seconds 00:08:05.807 Verify: Yes 00:08:05.807 00:08:05.807 Running for 1 seconds... 00:08:05.807 00:08:05.807 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:05.807 ------------------------------------------------------------------------------------ 00:08:05.807 0,1 42400/s 78 MiB/s 0 0 00:08:05.807 0,0 42176/s 77 MiB/s 0 0 00:08:05.807 ==================================================================================== 00:08:05.807 Total 84576/s 330 MiB/s 0 0' 00:08:05.807 14:14:11 -- accel/accel.sh@20 -- # IFS=: 00:08:05.807 14:14:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:05.807 14:14:11 -- accel/accel.sh@20 -- # read -r var val 00:08:05.807 14:14:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:05.807 14:14:11 -- accel/accel.sh@12 -- # build_accel_config 00:08:05.807 14:14:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:05.807 14:14:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:05.807 14:14:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:05.807 14:14:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:05.807 14:14:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:05.807 14:14:11 -- accel/accel.sh@41 -- # local IFS=, 00:08:05.807 14:14:11 -- accel/accel.sh@42 -- # jq -r . 00:08:05.808 [2024-12-05 14:14:11.183608] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
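accel_decomp_mthread narrows the run to a single core (mask 0x1) but asks for two worker threads via -T 2; the 0,0 and 0,1 rows above are those two threads sharing core 0. An equivalent standalone invocation, with the same caveats as the earlier sketches, would be:

  # two decompress threads on core 0
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2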
00:08:05.808 [2024-12-05 14:14:11.183949] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71433 ] 00:08:05.808 [2024-12-05 14:14:11.312872] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.808 [2024-12-05 14:14:11.379166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.066 14:14:11 -- accel/accel.sh@21 -- # val= 00:08:06.066 14:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.066 14:14:11 -- accel/accel.sh@20 -- # IFS=: 00:08:06.066 14:14:11 -- accel/accel.sh@20 -- # read -r var val 00:08:06.066 14:14:11 -- accel/accel.sh@21 -- # val= 00:08:06.066 14:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.067 14:14:11 -- accel/accel.sh@20 -- # IFS=: 00:08:06.067 14:14:11 -- accel/accel.sh@20 -- # read -r var val 00:08:06.067 14:14:11 -- accel/accel.sh@21 -- # val= 00:08:06.067 14:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.067 14:14:11 -- accel/accel.sh@20 -- # IFS=: 00:08:06.067 14:14:11 -- accel/accel.sh@20 -- # read -r var val 00:08:06.067 14:14:11 -- accel/accel.sh@21 -- # val=0x1 00:08:06.067 14:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.067 14:14:11 -- accel/accel.sh@20 -- # IFS=: 00:08:06.067 14:14:11 -- accel/accel.sh@20 -- # read -r var val 00:08:06.067 14:14:11 -- accel/accel.sh@21 -- # val= 00:08:06.067 14:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.067 14:14:11 -- accel/accel.sh@20 -- # IFS=: 00:08:06.067 14:14:11 -- accel/accel.sh@20 -- # read -r var val 00:08:06.067 14:14:11 -- accel/accel.sh@21 -- # val= 00:08:06.067 14:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.067 14:14:11 -- accel/accel.sh@20 -- # IFS=: 00:08:06.067 14:14:11 -- accel/accel.sh@20 -- # read -r var val 00:08:06.067 14:14:11 -- accel/accel.sh@21 -- # val=decompress 00:08:06.067 14:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.067 14:14:11 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:06.067 14:14:11 -- accel/accel.sh@20 -- # IFS=: 00:08:06.067 14:14:11 -- accel/accel.sh@20 -- # read -r var val 00:08:06.067 14:14:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:06.067 14:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.067 14:14:11 -- accel/accel.sh@20 -- # IFS=: 00:08:06.067 14:14:11 -- accel/accel.sh@20 -- # read -r var val 00:08:06.067 14:14:11 -- accel/accel.sh@21 -- # val= 00:08:06.067 14:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.067 14:14:11 -- accel/accel.sh@20 -- # IFS=: 00:08:06.067 14:14:11 -- accel/accel.sh@20 -- # read -r var val 00:08:06.067 14:14:11 -- accel/accel.sh@21 -- # val=software 00:08:06.067 14:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.067 14:14:11 -- accel/accel.sh@23 -- # accel_module=software 00:08:06.067 14:14:11 -- accel/accel.sh@20 -- # IFS=: 00:08:06.067 14:14:11 -- accel/accel.sh@20 -- # read -r var val 00:08:06.067 14:14:11 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:06.067 14:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.067 14:14:11 -- accel/accel.sh@20 -- # IFS=: 00:08:06.067 14:14:11 -- accel/accel.sh@20 -- # read -r var val 00:08:06.067 14:14:11 -- accel/accel.sh@21 -- # val=32 00:08:06.067 14:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.067 14:14:11 -- accel/accel.sh@20 -- # IFS=: 00:08:06.067 14:14:11 -- accel/accel.sh@20 -- # read -r var val 00:08:06.067 14:14:11 -- 
accel/accel.sh@21 -- # val=32 00:08:06.067 14:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.067 14:14:11 -- accel/accel.sh@20 -- # IFS=: 00:08:06.067 14:14:11 -- accel/accel.sh@20 -- # read -r var val 00:08:06.067 14:14:11 -- accel/accel.sh@21 -- # val=2 00:08:06.067 14:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.067 14:14:11 -- accel/accel.sh@20 -- # IFS=: 00:08:06.067 14:14:11 -- accel/accel.sh@20 -- # read -r var val 00:08:06.067 14:14:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:06.067 14:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.067 14:14:11 -- accel/accel.sh@20 -- # IFS=: 00:08:06.067 14:14:11 -- accel/accel.sh@20 -- # read -r var val 00:08:06.067 14:14:11 -- accel/accel.sh@21 -- # val=Yes 00:08:06.067 14:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.067 14:14:11 -- accel/accel.sh@20 -- # IFS=: 00:08:06.067 14:14:11 -- accel/accel.sh@20 -- # read -r var val 00:08:06.067 14:14:11 -- accel/accel.sh@21 -- # val= 00:08:06.067 14:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.067 14:14:11 -- accel/accel.sh@20 -- # IFS=: 00:08:06.067 14:14:11 -- accel/accel.sh@20 -- # read -r var val 00:08:06.067 14:14:11 -- accel/accel.sh@21 -- # val= 00:08:06.067 14:14:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.067 14:14:11 -- accel/accel.sh@20 -- # IFS=: 00:08:06.067 14:14:11 -- accel/accel.sh@20 -- # read -r var val 00:08:07.025 14:14:12 -- accel/accel.sh@21 -- # val= 00:08:07.025 14:14:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.025 14:14:12 -- accel/accel.sh@20 -- # IFS=: 00:08:07.025 14:14:12 -- accel/accel.sh@20 -- # read -r var val 00:08:07.025 14:14:12 -- accel/accel.sh@21 -- # val= 00:08:07.025 14:14:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.025 14:14:12 -- accel/accel.sh@20 -- # IFS=: 00:08:07.025 14:14:12 -- accel/accel.sh@20 -- # read -r var val 00:08:07.025 14:14:12 -- accel/accel.sh@21 -- # val= 00:08:07.025 14:14:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.025 14:14:12 -- accel/accel.sh@20 -- # IFS=: 00:08:07.025 14:14:12 -- accel/accel.sh@20 -- # read -r var val 00:08:07.025 14:14:12 -- accel/accel.sh@21 -- # val= 00:08:07.025 14:14:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.025 14:14:12 -- accel/accel.sh@20 -- # IFS=: 00:08:07.025 14:14:12 -- accel/accel.sh@20 -- # read -r var val 00:08:07.025 14:14:12 -- accel/accel.sh@21 -- # val= 00:08:07.025 14:14:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.025 14:14:12 -- accel/accel.sh@20 -- # IFS=: 00:08:07.025 14:14:12 -- accel/accel.sh@20 -- # read -r var val 00:08:07.025 14:14:12 -- accel/accel.sh@21 -- # val= 00:08:07.025 14:14:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.025 14:14:12 -- accel/accel.sh@20 -- # IFS=: 00:08:07.025 14:14:12 -- accel/accel.sh@20 -- # read -r var val 00:08:07.025 14:14:12 -- accel/accel.sh@21 -- # val= 00:08:07.025 14:14:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.025 14:14:12 -- accel/accel.sh@20 -- # IFS=: 00:08:07.025 14:14:12 -- accel/accel.sh@20 -- # read -r var val 00:08:07.025 14:14:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:07.025 14:14:12 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:07.025 14:14:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:07.025 00:08:07.025 real 0m2.971s 00:08:07.025 user 0m2.481s 00:08:07.025 sys 0m0.286s 00:08:07.025 14:14:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:07.025 ************************************ 00:08:07.025 END TEST accel_decomp_mthread 00:08:07.025 
************************************ 00:08:07.025 14:14:12 -- common/autotest_common.sh@10 -- # set +x 00:08:07.025 14:14:12 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:07.025 14:14:12 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:07.025 14:14:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:07.025 14:14:12 -- common/autotest_common.sh@10 -- # set +x 00:08:07.025 ************************************ 00:08:07.025 START TEST accel_deomp_full_mthread 00:08:07.025 ************************************ 00:08:07.025 14:14:12 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:07.026 14:14:12 -- accel/accel.sh@16 -- # local accel_opc 00:08:07.026 14:14:12 -- accel/accel.sh@17 -- # local accel_module 00:08:07.026 14:14:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:07.026 14:14:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:07.026 14:14:12 -- accel/accel.sh@12 -- # build_accel_config 00:08:07.026 14:14:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:07.026 14:14:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:07.026 14:14:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:07.026 14:14:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:07.026 14:14:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:07.026 14:14:12 -- accel/accel.sh@41 -- # local IFS=, 00:08:07.026 14:14:12 -- accel/accel.sh@42 -- # jq -r . 00:08:07.284 [2024-12-05 14:14:12.691590] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:07.284 [2024-12-05 14:14:12.691687] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71462 ] 00:08:07.284 [2024-12-05 14:14:12.829684] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.284 [2024-12-05 14:14:12.891283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.661 14:14:14 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:08.661 00:08:08.661 SPDK Configuration: 00:08:08.661 Core mask: 0x1 00:08:08.661 00:08:08.661 Accel Perf Configuration: 00:08:08.661 Workload Type: decompress 00:08:08.661 Transfer size: 111250 bytes 00:08:08.661 Vector count 1 00:08:08.661 Module: software 00:08:08.661 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:08.661 Queue depth: 32 00:08:08.661 Allocate depth: 32 00:08:08.661 # threads/core: 2 00:08:08.661 Run time: 1 seconds 00:08:08.661 Verify: Yes 00:08:08.661 00:08:08.661 Running for 1 seconds... 
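accel_deomp_full_mthread (the spelling comes from the test script's own run_test name) combines both variations: full-size 111250-byte transfers via -o 0 and two worker threads on core 0 via -T 2, as the configuration dump above reflects. A standalone sketch under the same assumptions:

  # full-buffer decompress, two threads on core 0
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2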
00:08:08.661 00:08:08.661 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:08.661 ------------------------------------------------------------------------------------ 00:08:08.661 0,1 2880/s 118 MiB/s 0 0 00:08:08.661 0,0 2848/s 117 MiB/s 0 0 00:08:08.661 ==================================================================================== 00:08:08.661 Total 5728/s 607 MiB/s 0 0' 00:08:08.661 14:14:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:08.661 14:14:14 -- accel/accel.sh@20 -- # IFS=: 00:08:08.661 14:14:14 -- accel/accel.sh@20 -- # read -r var val 00:08:08.661 14:14:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:08.661 14:14:14 -- accel/accel.sh@12 -- # build_accel_config 00:08:08.661 14:14:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:08.661 14:14:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.661 14:14:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.661 14:14:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:08.661 14:14:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:08.661 14:14:14 -- accel/accel.sh@41 -- # local IFS=, 00:08:08.662 14:14:14 -- accel/accel.sh@42 -- # jq -r . 00:08:08.662 [2024-12-05 14:14:14.135253] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:08.662 [2024-12-05 14:14:14.135377] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71482 ] 00:08:08.662 [2024-12-05 14:14:14.281610] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.920 [2024-12-05 14:14:14.342679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.921 14:14:14 -- accel/accel.sh@21 -- # val= 00:08:08.921 14:14:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.921 14:14:14 -- accel/accel.sh@20 -- # IFS=: 00:08:08.921 14:14:14 -- accel/accel.sh@20 -- # read -r var val 00:08:08.921 14:14:14 -- accel/accel.sh@21 -- # val= 00:08:08.921 14:14:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.921 14:14:14 -- accel/accel.sh@20 -- # IFS=: 00:08:08.921 14:14:14 -- accel/accel.sh@20 -- # read -r var val 00:08:08.921 14:14:14 -- accel/accel.sh@21 -- # val= 00:08:08.921 14:14:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.921 14:14:14 -- accel/accel.sh@20 -- # IFS=: 00:08:08.921 14:14:14 -- accel/accel.sh@20 -- # read -r var val 00:08:08.921 14:14:14 -- accel/accel.sh@21 -- # val=0x1 00:08:08.921 14:14:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.921 14:14:14 -- accel/accel.sh@20 -- # IFS=: 00:08:08.921 14:14:14 -- accel/accel.sh@20 -- # read -r var val 00:08:08.921 14:14:14 -- accel/accel.sh@21 -- # val= 00:08:08.921 14:14:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.921 14:14:14 -- accel/accel.sh@20 -- # IFS=: 00:08:08.921 14:14:14 -- accel/accel.sh@20 -- # read -r var val 00:08:08.921 14:14:14 -- accel/accel.sh@21 -- # val= 00:08:08.921 14:14:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.921 14:14:14 -- accel/accel.sh@20 -- # IFS=: 00:08:08.921 14:14:14 -- accel/accel.sh@20 -- # read -r var val 00:08:08.921 14:14:14 -- accel/accel.sh@21 -- # val=decompress 00:08:08.921 14:14:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.921 14:14:14 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:08:08.921 14:14:14 -- accel/accel.sh@20 -- # IFS=: 00:08:08.921 14:14:14 -- accel/accel.sh@20 -- # read -r var val 00:08:08.921 14:14:14 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:08.921 14:14:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.921 14:14:14 -- accel/accel.sh@20 -- # IFS=: 00:08:08.921 14:14:14 -- accel/accel.sh@20 -- # read -r var val 00:08:08.921 14:14:14 -- accel/accel.sh@21 -- # val= 00:08:08.921 14:14:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.921 14:14:14 -- accel/accel.sh@20 -- # IFS=: 00:08:08.921 14:14:14 -- accel/accel.sh@20 -- # read -r var val 00:08:08.921 14:14:14 -- accel/accel.sh@21 -- # val=software 00:08:08.921 14:14:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.921 14:14:14 -- accel/accel.sh@23 -- # accel_module=software 00:08:08.921 14:14:14 -- accel/accel.sh@20 -- # IFS=: 00:08:08.921 14:14:14 -- accel/accel.sh@20 -- # read -r var val 00:08:08.921 14:14:14 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:08.921 14:14:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.921 14:14:14 -- accel/accel.sh@20 -- # IFS=: 00:08:08.921 14:14:14 -- accel/accel.sh@20 -- # read -r var val 00:08:08.921 14:14:14 -- accel/accel.sh@21 -- # val=32 00:08:08.921 14:14:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.921 14:14:14 -- accel/accel.sh@20 -- # IFS=: 00:08:08.921 14:14:14 -- accel/accel.sh@20 -- # read -r var val 00:08:08.921 14:14:14 -- accel/accel.sh@21 -- # val=32 00:08:08.921 14:14:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.921 14:14:14 -- accel/accel.sh@20 -- # IFS=: 00:08:08.921 14:14:14 -- accel/accel.sh@20 -- # read -r var val 00:08:08.921 14:14:14 -- accel/accel.sh@21 -- # val=2 00:08:08.921 14:14:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.921 14:14:14 -- accel/accel.sh@20 -- # IFS=: 00:08:08.921 14:14:14 -- accel/accel.sh@20 -- # read -r var val 00:08:08.921 14:14:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:08.921 14:14:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.921 14:14:14 -- accel/accel.sh@20 -- # IFS=: 00:08:08.921 14:14:14 -- accel/accel.sh@20 -- # read -r var val 00:08:08.921 14:14:14 -- accel/accel.sh@21 -- # val=Yes 00:08:08.921 14:14:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.921 14:14:14 -- accel/accel.sh@20 -- # IFS=: 00:08:08.921 14:14:14 -- accel/accel.sh@20 -- # read -r var val 00:08:08.921 14:14:14 -- accel/accel.sh@21 -- # val= 00:08:08.921 14:14:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.921 14:14:14 -- accel/accel.sh@20 -- # IFS=: 00:08:08.921 14:14:14 -- accel/accel.sh@20 -- # read -r var val 00:08:08.921 14:14:14 -- accel/accel.sh@21 -- # val= 00:08:08.921 14:14:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.921 14:14:14 -- accel/accel.sh@20 -- # IFS=: 00:08:08.921 14:14:14 -- accel/accel.sh@20 -- # read -r var val 00:08:10.302 14:14:15 -- accel/accel.sh@21 -- # val= 00:08:10.302 14:14:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.302 14:14:15 -- accel/accel.sh@20 -- # IFS=: 00:08:10.302 14:14:15 -- accel/accel.sh@20 -- # read -r var val 00:08:10.302 14:14:15 -- accel/accel.sh@21 -- # val= 00:08:10.302 14:14:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.302 14:14:15 -- accel/accel.sh@20 -- # IFS=: 00:08:10.302 14:14:15 -- accel/accel.sh@20 -- # read -r var val 00:08:10.302 14:14:15 -- accel/accel.sh@21 -- # val= 00:08:10.302 14:14:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.302 14:14:15 -- accel/accel.sh@20 -- # IFS=: 00:08:10.302 14:14:15 -- accel/accel.sh@20 -- # 
read -r var val 00:08:10.302 14:14:15 -- accel/accel.sh@21 -- # val= 00:08:10.302 14:14:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.302 14:14:15 -- accel/accel.sh@20 -- # IFS=: 00:08:10.302 14:14:15 -- accel/accel.sh@20 -- # read -r var val 00:08:10.302 14:14:15 -- accel/accel.sh@21 -- # val= 00:08:10.302 14:14:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.302 14:14:15 -- accel/accel.sh@20 -- # IFS=: 00:08:10.302 14:14:15 -- accel/accel.sh@20 -- # read -r var val 00:08:10.302 14:14:15 -- accel/accel.sh@21 -- # val= 00:08:10.302 14:14:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.302 14:14:15 -- accel/accel.sh@20 -- # IFS=: 00:08:10.302 14:14:15 -- accel/accel.sh@20 -- # read -r var val 00:08:10.302 14:14:15 -- accel/accel.sh@21 -- # val= 00:08:10.302 14:14:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.302 14:14:15 -- accel/accel.sh@20 -- # IFS=: 00:08:10.302 14:14:15 -- accel/accel.sh@20 -- # read -r var val 00:08:10.302 14:14:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:10.302 14:14:15 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:10.302 14:14:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:10.302 00:08:10.302 real 0m2.894s 00:08:10.302 user 0m2.433s 00:08:10.302 sys 0m0.256s 00:08:10.302 14:14:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:10.302 ************************************ 00:08:10.302 END TEST accel_deomp_full_mthread 00:08:10.302 ************************************ 00:08:10.302 14:14:15 -- common/autotest_common.sh@10 -- # set +x 00:08:10.302 14:14:15 -- accel/accel.sh@116 -- # [[ n == y ]] 00:08:10.302 14:14:15 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:10.302 14:14:15 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:10.302 14:14:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:10.302 14:14:15 -- accel/accel.sh@129 -- # build_accel_config 00:08:10.302 14:14:15 -- common/autotest_common.sh@10 -- # set +x 00:08:10.302 14:14:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:10.302 14:14:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:10.302 14:14:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:10.302 14:14:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:10.302 14:14:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:10.302 14:14:15 -- accel/accel.sh@41 -- # local IFS=, 00:08:10.302 14:14:15 -- accel/accel.sh@42 -- # jq -r . 00:08:10.302 ************************************ 00:08:10.302 START TEST accel_dif_functional_tests 00:08:10.302 ************************************ 00:08:10.302 14:14:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:10.302 [2024-12-05 14:14:15.659910] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
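accel_dif_functional_tests is a CUnit binary rather than an accel_perf run: the harness starts /home/vagrant/spdk_repo/spdk/test/accel/dif/dif with its generated accel JSON config on file descriptor 62 and lets it exercise the DIF generate/verify paths. The "Failed to compare Guard/App Tag/Ref Tag" messages in the output below come from intentional negative-path cases and do not indicate a failure; the Run Summary still reports all 20 tests passed. A sketch of a direct invocation, where CONFIG_JSON is only a placeholder for whatever accel config file is supplied on that descriptor:

  # run the DIF CUnit suite with an accel config on fd 62 (CONFIG_JSON is a stand-in file name)
  /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 62< CONFIG_JSON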
00:08:10.302 [2024-12-05 14:14:15.660029] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71517 ] 00:08:10.302 [2024-12-05 14:14:15.794760] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:10.302 [2024-12-05 14:14:15.857134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.302 [2024-12-05 14:14:15.857291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.302 [2024-12-05 14:14:15.857294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.302 00:08:10.302 00:08:10.302 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.302 http://cunit.sourceforge.net/ 00:08:10.302 00:08:10.302 00:08:10.302 Suite: accel_dif 00:08:10.302 Test: verify: DIF generated, GUARD check ...passed 00:08:10.302 Test: verify: DIF generated, APPTAG check ...passed 00:08:10.302 Test: verify: DIF generated, REFTAG check ...passed 00:08:10.302 Test: verify: DIF not generated, GUARD check ...passed 00:08:10.302 Test: verify: DIF not generated, APPTAG check ...passed 00:08:10.302 Test: verify: DIF not generated, REFTAG check ...passed 00:08:10.302 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:10.302 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:08:10.302 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:10.302 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:10.302 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:10.302 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-12-05 14:14:15.946305] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:10.302 [2024-12-05 14:14:15.946401] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:10.302 [2024-12-05 14:14:15.946436] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:10.302 [2024-12-05 14:14:15.946463] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:10.302 [2024-12-05 14:14:15.946487] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:10.302 [2024-12-05 14:14:15.946511] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:10.302 [2024-12-05 14:14:15.946563] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:10.302 passed 00:08:10.303 Test: generate copy: DIF generated, GUARD check ...[2024-12-05 14:14:15.946693] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:10.303 passed 00:08:10.303 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:10.303 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:10.303 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:10.303 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:10.303 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:10.303 Test: generate copy: iovecs-len validate ...passed 00:08:10.303 Test: generate copy: buffer alignment validate ...[2024-12-05 14:14:15.946940] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:08:10.303 passed 00:08:10.303 00:08:10.303 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.303 suites 1 1 n/a 0 0 00:08:10.303 tests 20 20 20 0 0 00:08:10.303 asserts 204 204 204 0 n/a 00:08:10.303 00:08:10.303 Elapsed time = 0.002 seconds 00:08:10.561 00:08:10.561 real 0m0.526s 00:08:10.561 user 0m0.719s 00:08:10.561 sys 0m0.148s 00:08:10.561 ************************************ 00:08:10.561 14:14:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:10.561 14:14:16 -- common/autotest_common.sh@10 -- # set +x 00:08:10.561 END TEST accel_dif_functional_tests 00:08:10.561 ************************************ 00:08:10.561 00:08:10.561 real 1m2.045s 00:08:10.561 user 1m6.654s 00:08:10.561 sys 0m6.613s 00:08:10.561 14:14:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:10.561 14:14:16 -- common/autotest_common.sh@10 -- # set +x 00:08:10.561 ************************************ 00:08:10.561 END TEST accel 00:08:10.561 ************************************ 00:08:10.820 14:14:16 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:10.820 14:14:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:10.820 14:14:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:10.820 14:14:16 -- common/autotest_common.sh@10 -- # set +x 00:08:10.820 ************************************ 00:08:10.820 START TEST accel_rpc 00:08:10.820 ************************************ 00:08:10.820 14:14:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:10.820 * Looking for test storage... 00:08:10.820 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:08:10.820 14:14:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:10.820 14:14:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:10.820 14:14:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:10.820 14:14:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:10.820 14:14:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:10.820 14:14:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:10.820 14:14:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:10.820 14:14:16 -- scripts/common.sh@335 -- # IFS=.-: 00:08:10.820 14:14:16 -- scripts/common.sh@335 -- # read -ra ver1 00:08:10.820 14:14:16 -- scripts/common.sh@336 -- # IFS=.-: 00:08:10.820 14:14:16 -- scripts/common.sh@336 -- # read -ra ver2 00:08:10.820 14:14:16 -- scripts/common.sh@337 -- # local 'op=<' 00:08:10.820 14:14:16 -- scripts/common.sh@339 -- # ver1_l=2 00:08:10.820 14:14:16 -- scripts/common.sh@340 -- # ver2_l=1 00:08:10.820 14:14:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:10.820 14:14:16 -- scripts/common.sh@343 -- # case "$op" in 00:08:10.820 14:14:16 -- scripts/common.sh@344 -- # : 1 00:08:10.820 14:14:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:10.820 14:14:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:10.820 14:14:16 -- scripts/common.sh@364 -- # decimal 1 00:08:10.820 14:14:16 -- scripts/common.sh@352 -- # local d=1 00:08:10.820 14:14:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:10.820 14:14:16 -- scripts/common.sh@354 -- # echo 1 00:08:10.820 14:14:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:10.820 14:14:16 -- scripts/common.sh@365 -- # decimal 2 00:08:10.820 14:14:16 -- scripts/common.sh@352 -- # local d=2 00:08:10.820 14:14:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:10.820 14:14:16 -- scripts/common.sh@354 -- # echo 2 00:08:10.820 14:14:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:10.820 14:14:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:10.820 14:14:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:10.820 14:14:16 -- scripts/common.sh@367 -- # return 0 00:08:10.820 14:14:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:10.820 14:14:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:10.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.820 --rc genhtml_branch_coverage=1 00:08:10.820 --rc genhtml_function_coverage=1 00:08:10.820 --rc genhtml_legend=1 00:08:10.820 --rc geninfo_all_blocks=1 00:08:10.820 --rc geninfo_unexecuted_blocks=1 00:08:10.820 00:08:10.820 ' 00:08:10.820 14:14:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:10.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.820 --rc genhtml_branch_coverage=1 00:08:10.820 --rc genhtml_function_coverage=1 00:08:10.820 --rc genhtml_legend=1 00:08:10.820 --rc geninfo_all_blocks=1 00:08:10.820 --rc geninfo_unexecuted_blocks=1 00:08:10.820 00:08:10.820 ' 00:08:10.820 14:14:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:10.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.820 --rc genhtml_branch_coverage=1 00:08:10.820 --rc genhtml_function_coverage=1 00:08:10.820 --rc genhtml_legend=1 00:08:10.820 --rc geninfo_all_blocks=1 00:08:10.820 --rc geninfo_unexecuted_blocks=1 00:08:10.820 00:08:10.820 ' 00:08:10.820 14:14:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:10.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.820 --rc genhtml_branch_coverage=1 00:08:10.820 --rc genhtml_function_coverage=1 00:08:10.820 --rc genhtml_legend=1 00:08:10.820 --rc geninfo_all_blocks=1 00:08:10.820 --rc geninfo_unexecuted_blocks=1 00:08:10.820 00:08:10.820 ' 00:08:10.820 14:14:16 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:10.820 14:14:16 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=71594 00:08:10.820 14:14:16 -- accel/accel_rpc.sh@15 -- # waitforlisten 71594 00:08:10.820 14:14:16 -- common/autotest_common.sh@829 -- # '[' -z 71594 ']' 00:08:10.820 14:14:16 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:10.820 14:14:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.820 14:14:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:10.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.820 14:14:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
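The accel_rpc suite has just launched spdk_tgt with --wait-for-rpc, so the accel framework is not yet initialized; the assign-opcode test that follows points the copy opcode first at a bogus module name ("incorrect") and then at software, starts the framework, and checks the resulting assignment. Driven by hand with rpc.py, the sequence it exercises looks roughly like this (rpc.py path assumed relative to the SPDK repo root):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
  ./scripts/rpc.py accel_assign_opc -o copy -m incorrect    # accepted pre-init, later overridden
  ./scripts/rpc.py accel_assign_opc -o copy -m software
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy | grep software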
00:08:10.820 14:14:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:10.820 14:14:16 -- common/autotest_common.sh@10 -- # set +x 00:08:10.820 [2024-12-05 14:14:16.462670] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:10.820 [2024-12-05 14:14:16.462746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71594 ] 00:08:11.079 [2024-12-05 14:14:16.593808] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.079 [2024-12-05 14:14:16.651075] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:11.079 [2024-12-05 14:14:16.651256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.079 14:14:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:11.079 14:14:16 -- common/autotest_common.sh@862 -- # return 0 00:08:11.079 14:14:16 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:11.079 14:14:16 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:11.079 14:14:16 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:11.079 14:14:16 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:11.079 14:14:16 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:11.079 14:14:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:11.079 14:14:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:11.079 14:14:16 -- common/autotest_common.sh@10 -- # set +x 00:08:11.079 ************************************ 00:08:11.079 START TEST accel_assign_opcode 00:08:11.079 ************************************ 00:08:11.079 14:14:16 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:08:11.079 14:14:16 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:11.079 14:14:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.079 14:14:16 -- common/autotest_common.sh@10 -- # set +x 00:08:11.079 [2024-12-05 14:14:16.715672] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:11.079 14:14:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.079 14:14:16 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:11.079 14:14:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.079 14:14:16 -- common/autotest_common.sh@10 -- # set +x 00:08:11.079 [2024-12-05 14:14:16.723683] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:11.338 14:14:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.338 14:14:16 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:11.338 14:14:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.338 14:14:16 -- common/autotest_common.sh@10 -- # set +x 00:08:11.338 14:14:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.338 14:14:16 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:11.338 14:14:16 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:11.338 14:14:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.338 14:14:16 -- common/autotest_common.sh@10 -- # set +x 00:08:11.338 14:14:16 -- accel/accel_rpc.sh@42 -- # grep software 00:08:11.338 14:14:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.596 software 00:08:11.596 00:08:11.596 
real 0m0.286s 00:08:11.596 user 0m0.053s 00:08:11.596 sys 0m0.013s 00:08:11.596 14:14:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:11.596 14:14:16 -- common/autotest_common.sh@10 -- # set +x 00:08:11.596 ************************************ 00:08:11.596 END TEST accel_assign_opcode 00:08:11.596 ************************************ 00:08:11.596 14:14:17 -- accel/accel_rpc.sh@55 -- # killprocess 71594 00:08:11.596 14:14:17 -- common/autotest_common.sh@936 -- # '[' -z 71594 ']' 00:08:11.596 14:14:17 -- common/autotest_common.sh@940 -- # kill -0 71594 00:08:11.596 14:14:17 -- common/autotest_common.sh@941 -- # uname 00:08:11.597 14:14:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:11.597 14:14:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71594 00:08:11.597 14:14:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:11.597 killing process with pid 71594 00:08:11.597 14:14:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:11.597 14:14:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71594' 00:08:11.597 14:14:17 -- common/autotest_common.sh@955 -- # kill 71594 00:08:11.597 14:14:17 -- common/autotest_common.sh@960 -- # wait 71594 00:08:11.855 00:08:11.855 real 0m1.204s 00:08:11.855 user 0m1.095s 00:08:11.855 sys 0m0.423s 00:08:11.855 14:14:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:11.855 14:14:17 -- common/autotest_common.sh@10 -- # set +x 00:08:11.855 ************************************ 00:08:11.855 END TEST accel_rpc 00:08:11.855 ************************************ 00:08:11.855 14:14:17 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:11.855 14:14:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:11.855 14:14:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:11.855 14:14:17 -- common/autotest_common.sh@10 -- # set +x 00:08:11.855 ************************************ 00:08:11.855 START TEST app_cmdline 00:08:11.855 ************************************ 00:08:11.855 14:14:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:12.115 * Looking for test storage... 
00:08:12.115 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:12.115 14:14:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:12.115 14:14:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:12.115 14:14:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:12.115 14:14:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:12.115 14:14:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:12.115 14:14:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:12.115 14:14:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:12.115 14:14:17 -- scripts/common.sh@335 -- # IFS=.-: 00:08:12.115 14:14:17 -- scripts/common.sh@335 -- # read -ra ver1 00:08:12.115 14:14:17 -- scripts/common.sh@336 -- # IFS=.-: 00:08:12.115 14:14:17 -- scripts/common.sh@336 -- # read -ra ver2 00:08:12.115 14:14:17 -- scripts/common.sh@337 -- # local 'op=<' 00:08:12.115 14:14:17 -- scripts/common.sh@339 -- # ver1_l=2 00:08:12.115 14:14:17 -- scripts/common.sh@340 -- # ver2_l=1 00:08:12.115 14:14:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:12.115 14:14:17 -- scripts/common.sh@343 -- # case "$op" in 00:08:12.115 14:14:17 -- scripts/common.sh@344 -- # : 1 00:08:12.115 14:14:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:12.115 14:14:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:12.115 14:14:17 -- scripts/common.sh@364 -- # decimal 1 00:08:12.115 14:14:17 -- scripts/common.sh@352 -- # local d=1 00:08:12.115 14:14:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:12.115 14:14:17 -- scripts/common.sh@354 -- # echo 1 00:08:12.115 14:14:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:12.115 14:14:17 -- scripts/common.sh@365 -- # decimal 2 00:08:12.115 14:14:17 -- scripts/common.sh@352 -- # local d=2 00:08:12.115 14:14:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:12.115 14:14:17 -- scripts/common.sh@354 -- # echo 2 00:08:12.115 14:14:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:12.115 14:14:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:12.115 14:14:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:12.115 14:14:17 -- scripts/common.sh@367 -- # return 0 00:08:12.115 14:14:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:12.115 14:14:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:12.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.115 --rc genhtml_branch_coverage=1 00:08:12.115 --rc genhtml_function_coverage=1 00:08:12.115 --rc genhtml_legend=1 00:08:12.115 --rc geninfo_all_blocks=1 00:08:12.115 --rc geninfo_unexecuted_blocks=1 00:08:12.115 00:08:12.115 ' 00:08:12.115 14:14:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:12.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.115 --rc genhtml_branch_coverage=1 00:08:12.115 --rc genhtml_function_coverage=1 00:08:12.115 --rc genhtml_legend=1 00:08:12.115 --rc geninfo_all_blocks=1 00:08:12.115 --rc geninfo_unexecuted_blocks=1 00:08:12.115 00:08:12.115 ' 00:08:12.115 14:14:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:12.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.115 --rc genhtml_branch_coverage=1 00:08:12.115 --rc genhtml_function_coverage=1 00:08:12.115 --rc genhtml_legend=1 00:08:12.115 --rc geninfo_all_blocks=1 00:08:12.115 --rc geninfo_unexecuted_blocks=1 00:08:12.115 00:08:12.116 ' 00:08:12.116 14:14:17 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:12.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.116 --rc genhtml_branch_coverage=1 00:08:12.116 --rc genhtml_function_coverage=1 00:08:12.116 --rc genhtml_legend=1 00:08:12.116 --rc geninfo_all_blocks=1 00:08:12.116 --rc geninfo_unexecuted_blocks=1 00:08:12.116 00:08:12.116 ' 00:08:12.116 14:14:17 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:12.116 14:14:17 -- app/cmdline.sh@17 -- # spdk_tgt_pid=71693 00:08:12.116 14:14:17 -- app/cmdline.sh@18 -- # waitforlisten 71693 00:08:12.116 14:14:17 -- common/autotest_common.sh@829 -- # '[' -z 71693 ']' 00:08:12.116 14:14:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.116 14:14:17 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:12.116 14:14:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:12.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.116 14:14:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.116 14:14:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:12.116 14:14:17 -- common/autotest_common.sh@10 -- # set +x 00:08:12.116 [2024-12-05 14:14:17.715698] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:12.116 [2024-12-05 14:14:17.715829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71693 ] 00:08:12.374 [2024-12-05 14:14:17.854476] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.374 [2024-12-05 14:14:17.911340] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:12.374 [2024-12-05 14:14:17.911524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.310 14:14:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:13.310 14:14:18 -- common/autotest_common.sh@862 -- # return 0 00:08:13.310 14:14:18 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:13.310 { 00:08:13.310 "fields": { 00:08:13.310 "commit": "c13c99a5e", 00:08:13.310 "major": 24, 00:08:13.310 "minor": 1, 00:08:13.310 "patch": 1, 00:08:13.310 "suffix": "-pre" 00:08:13.310 }, 00:08:13.310 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e" 00:08:13.310 } 00:08:13.310 14:14:18 -- app/cmdline.sh@22 -- # expected_methods=() 00:08:13.310 14:14:18 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:13.310 14:14:18 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:13.310 14:14:18 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:13.310 14:14:18 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:13.310 14:14:18 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:13.310 14:14:18 -- app/cmdline.sh@26 -- # sort 00:08:13.310 14:14:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.310 14:14:18 -- common/autotest_common.sh@10 -- # set +x 00:08:13.310 14:14:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.569 14:14:18 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:13.569 14:14:18 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:13.569 14:14:18 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:13.569 14:14:18 -- common/autotest_common.sh@650 -- # local es=0 00:08:13.569 14:14:18 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:13.569 14:14:18 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:13.569 14:14:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:13.569 14:14:18 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:13.569 14:14:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:13.569 14:14:18 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:13.569 14:14:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:13.569 14:14:18 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:13.569 14:14:18 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:13.569 14:14:18 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:13.569 2024/12/05 14:14:19 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:08:13.569 request: 00:08:13.569 { 00:08:13.569 "method": "env_dpdk_get_mem_stats", 00:08:13.569 "params": {} 00:08:13.569 } 00:08:13.569 Got JSON-RPC error response 00:08:13.569 GoRPCClient: error on JSON-RPC call 00:08:13.827 14:14:19 -- common/autotest_common.sh@653 -- # es=1 00:08:13.827 14:14:19 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:13.827 14:14:19 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:13.827 14:14:19 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:13.827 14:14:19 -- app/cmdline.sh@1 -- # killprocess 71693 00:08:13.827 14:14:19 -- common/autotest_common.sh@936 -- # '[' -z 71693 ']' 00:08:13.827 14:14:19 -- common/autotest_common.sh@940 -- # kill -0 71693 00:08:13.827 14:14:19 -- common/autotest_common.sh@941 -- # uname 00:08:13.827 14:14:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:13.827 14:14:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71693 00:08:13.827 14:14:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:13.827 14:14:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:13.827 killing process with pid 71693 00:08:13.828 14:14:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71693' 00:08:13.828 14:14:19 -- common/autotest_common.sh@955 -- # kill 71693 00:08:13.828 14:14:19 -- common/autotest_common.sh@960 -- # wait 71693 00:08:14.087 ************************************ 00:08:14.087 END TEST app_cmdline 00:08:14.087 ************************************ 00:08:14.087 00:08:14.087 real 0m2.120s 00:08:14.087 user 0m2.579s 00:08:14.087 sys 0m0.537s 00:08:14.087 14:14:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:14.087 14:14:19 -- common/autotest_common.sh@10 -- # set +x 00:08:14.087 14:14:19 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:14.087 14:14:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:14.087 14:14:19 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:08:14.087 14:14:19 -- common/autotest_common.sh@10 -- # set +x 00:08:14.087 ************************************ 00:08:14.087 START TEST version 00:08:14.087 ************************************ 00:08:14.087 14:14:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:14.346 * Looking for test storage... 00:08:14.346 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:14.346 14:14:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:14.346 14:14:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:14.346 14:14:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:14.346 14:14:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:14.346 14:14:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:14.346 14:14:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:14.346 14:14:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:14.346 14:14:19 -- scripts/common.sh@335 -- # IFS=.-: 00:08:14.346 14:14:19 -- scripts/common.sh@335 -- # read -ra ver1 00:08:14.346 14:14:19 -- scripts/common.sh@336 -- # IFS=.-: 00:08:14.346 14:14:19 -- scripts/common.sh@336 -- # read -ra ver2 00:08:14.346 14:14:19 -- scripts/common.sh@337 -- # local 'op=<' 00:08:14.346 14:14:19 -- scripts/common.sh@339 -- # ver1_l=2 00:08:14.346 14:14:19 -- scripts/common.sh@340 -- # ver2_l=1 00:08:14.346 14:14:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:14.346 14:14:19 -- scripts/common.sh@343 -- # case "$op" in 00:08:14.346 14:14:19 -- scripts/common.sh@344 -- # : 1 00:08:14.346 14:14:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:14.346 14:14:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:14.346 14:14:19 -- scripts/common.sh@364 -- # decimal 1 00:08:14.346 14:14:19 -- scripts/common.sh@352 -- # local d=1 00:08:14.346 14:14:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:14.346 14:14:19 -- scripts/common.sh@354 -- # echo 1 00:08:14.346 14:14:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:14.346 14:14:19 -- scripts/common.sh@365 -- # decimal 2 00:08:14.346 14:14:19 -- scripts/common.sh@352 -- # local d=2 00:08:14.346 14:14:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:14.346 14:14:19 -- scripts/common.sh@354 -- # echo 2 00:08:14.346 14:14:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:14.346 14:14:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:14.346 14:14:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:14.346 14:14:19 -- scripts/common.sh@367 -- # return 0 00:08:14.346 14:14:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:14.346 14:14:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:14.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.346 --rc genhtml_branch_coverage=1 00:08:14.346 --rc genhtml_function_coverage=1 00:08:14.346 --rc genhtml_legend=1 00:08:14.346 --rc geninfo_all_blocks=1 00:08:14.346 --rc geninfo_unexecuted_blocks=1 00:08:14.346 00:08:14.346 ' 00:08:14.346 14:14:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:14.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.346 --rc genhtml_branch_coverage=1 00:08:14.346 --rc genhtml_function_coverage=1 00:08:14.346 --rc genhtml_legend=1 00:08:14.346 --rc geninfo_all_blocks=1 00:08:14.346 --rc geninfo_unexecuted_blocks=1 00:08:14.346 00:08:14.346 ' 00:08:14.346 
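Every "lt 1.15 2" block in this log is scripts/common.sh comparing two dotted version strings component by component (split on '.', '-' and ':') before the lcov --rc coverage flags are chosen. A simplified re-implementation of that comparison, written for illustration rather than copied from the tree, and handling numeric components only:

# version_lt A B  -> returns 0 when A sorts strictly before B
version_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 sorts before 2"   # matches the trace's return 0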
14:14:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:14.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.346 --rc genhtml_branch_coverage=1 00:08:14.346 --rc genhtml_function_coverage=1 00:08:14.346 --rc genhtml_legend=1 00:08:14.346 --rc geninfo_all_blocks=1 00:08:14.346 --rc geninfo_unexecuted_blocks=1 00:08:14.346 00:08:14.346 ' 00:08:14.346 14:14:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:14.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.346 --rc genhtml_branch_coverage=1 00:08:14.346 --rc genhtml_function_coverage=1 00:08:14.346 --rc genhtml_legend=1 00:08:14.346 --rc geninfo_all_blocks=1 00:08:14.347 --rc geninfo_unexecuted_blocks=1 00:08:14.347 00:08:14.347 ' 00:08:14.347 14:14:19 -- app/version.sh@17 -- # get_header_version major 00:08:14.347 14:14:19 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:14.347 14:14:19 -- app/version.sh@14 -- # tr -d '"' 00:08:14.347 14:14:19 -- app/version.sh@14 -- # cut -f2 00:08:14.347 14:14:19 -- app/version.sh@17 -- # major=24 00:08:14.347 14:14:19 -- app/version.sh@18 -- # get_header_version minor 00:08:14.347 14:14:19 -- app/version.sh@14 -- # cut -f2 00:08:14.347 14:14:19 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:14.347 14:14:19 -- app/version.sh@14 -- # tr -d '"' 00:08:14.347 14:14:19 -- app/version.sh@18 -- # minor=1 00:08:14.347 14:14:19 -- app/version.sh@19 -- # get_header_version patch 00:08:14.347 14:14:19 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:14.347 14:14:19 -- app/version.sh@14 -- # cut -f2 00:08:14.347 14:14:19 -- app/version.sh@14 -- # tr -d '"' 00:08:14.347 14:14:19 -- app/version.sh@19 -- # patch=1 00:08:14.347 14:14:19 -- app/version.sh@20 -- # get_header_version suffix 00:08:14.347 14:14:19 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:14.347 14:14:19 -- app/version.sh@14 -- # cut -f2 00:08:14.347 14:14:19 -- app/version.sh@14 -- # tr -d '"' 00:08:14.347 14:14:19 -- app/version.sh@20 -- # suffix=-pre 00:08:14.347 14:14:19 -- app/version.sh@22 -- # version=24.1 00:08:14.347 14:14:19 -- app/version.sh@25 -- # (( patch != 0 )) 00:08:14.347 14:14:19 -- app/version.sh@25 -- # version=24.1.1 00:08:14.347 14:14:19 -- app/version.sh@28 -- # version=24.1.1rc0 00:08:14.347 14:14:19 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:14.347 14:14:19 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:14.347 14:14:19 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:08:14.347 14:14:19 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:08:14.347 00:08:14.347 real 0m0.257s 00:08:14.347 user 0m0.174s 00:08:14.347 sys 0m0.122s 00:08:14.347 14:14:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:14.347 ************************************ 00:08:14.347 END TEST version 00:08:14.347 14:14:19 -- common/autotest_common.sh@10 -- # set +x 00:08:14.347 ************************************ 00:08:14.347 14:14:19 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:08:14.347 
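The version test above rebuilds the release string from include/spdk/version.h with grep/cut/tr and checks it against the Python package. A condensed sketch of that extraction, assuming the same repo layout as this run (the mapping of the -pre suffix to rc0 is simplified here):

# Re-derive the SPDK version the way app/version.sh does in the trace.
repo=/home/vagrant/spdk_repo/spdk
hdr=$repo/include/spdk/version.h

get_header_version() {   # e.g. get_header_version MAJOR -> 24
    grep -E "^#define SPDK_VERSION_$1[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
}

major=$(get_header_version MAJOR)     # 24 in this run
minor=$(get_header_version MINOR)     # 1
patch=$(get_header_version PATCH)     # 1
suffix=$(get_header_version SUFFIX)   # -pre, which the test turns into rc0

version=$major.$minor
(( patch != 0 )) && version=$version.$patch

py_version=$(PYTHONPATH=$repo/python python3 -c 'import spdk; print(spdk.__version__)')
echo "header: $version$suffix   python: $py_version"   # 24.1.1-pre vs 24.1.1rc0 in this run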
14:14:19 -- spdk/autotest.sh@191 -- # uname -s 00:08:14.347 14:14:19 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:08:14.347 14:14:19 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:08:14.347 14:14:19 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:08:14.347 14:14:19 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:08:14.347 14:14:19 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:08:14.347 14:14:19 -- spdk/autotest.sh@255 -- # timing_exit lib 00:08:14.347 14:14:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:14.347 14:14:19 -- common/autotest_common.sh@10 -- # set +x 00:08:14.606 14:14:20 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:08:14.606 14:14:20 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:08:14.606 14:14:20 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:08:14.606 14:14:20 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:08:14.606 14:14:20 -- spdk/autotest.sh@278 -- # '[' tcp = rdma ']' 00:08:14.606 14:14:20 -- spdk/autotest.sh@281 -- # '[' tcp = tcp ']' 00:08:14.606 14:14:20 -- spdk/autotest.sh@282 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:14.606 14:14:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:14.606 14:14:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:14.606 14:14:20 -- common/autotest_common.sh@10 -- # set +x 00:08:14.606 ************************************ 00:08:14.606 START TEST nvmf_tcp 00:08:14.606 ************************************ 00:08:14.606 14:14:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:14.606 * Looking for test storage... 00:08:14.606 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:14.606 14:14:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:14.606 14:14:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:14.606 14:14:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:14.606 14:14:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:14.606 14:14:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:14.606 14:14:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:14.606 14:14:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:14.606 14:14:20 -- scripts/common.sh@335 -- # IFS=.-: 00:08:14.606 14:14:20 -- scripts/common.sh@335 -- # read -ra ver1 00:08:14.606 14:14:20 -- scripts/common.sh@336 -- # IFS=.-: 00:08:14.606 14:14:20 -- scripts/common.sh@336 -- # read -ra ver2 00:08:14.606 14:14:20 -- scripts/common.sh@337 -- # local 'op=<' 00:08:14.606 14:14:20 -- scripts/common.sh@339 -- # ver1_l=2 00:08:14.606 14:14:20 -- scripts/common.sh@340 -- # ver2_l=1 00:08:14.606 14:14:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:14.606 14:14:20 -- scripts/common.sh@343 -- # case "$op" in 00:08:14.606 14:14:20 -- scripts/common.sh@344 -- # : 1 00:08:14.606 14:14:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:14.606 14:14:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:14.606 14:14:20 -- scripts/common.sh@364 -- # decimal 1 00:08:14.606 14:14:20 -- scripts/common.sh@352 -- # local d=1 00:08:14.606 14:14:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:14.606 14:14:20 -- scripts/common.sh@354 -- # echo 1 00:08:14.606 14:14:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:14.606 14:14:20 -- scripts/common.sh@365 -- # decimal 2 00:08:14.606 14:14:20 -- scripts/common.sh@352 -- # local d=2 00:08:14.606 14:14:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:14.606 14:14:20 -- scripts/common.sh@354 -- # echo 2 00:08:14.606 14:14:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:14.606 14:14:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:14.606 14:14:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:14.606 14:14:20 -- scripts/common.sh@367 -- # return 0 00:08:14.606 14:14:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:14.606 14:14:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:14.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.606 --rc genhtml_branch_coverage=1 00:08:14.606 --rc genhtml_function_coverage=1 00:08:14.606 --rc genhtml_legend=1 00:08:14.606 --rc geninfo_all_blocks=1 00:08:14.606 --rc geninfo_unexecuted_blocks=1 00:08:14.606 00:08:14.606 ' 00:08:14.606 14:14:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:14.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.606 --rc genhtml_branch_coverage=1 00:08:14.606 --rc genhtml_function_coverage=1 00:08:14.606 --rc genhtml_legend=1 00:08:14.606 --rc geninfo_all_blocks=1 00:08:14.606 --rc geninfo_unexecuted_blocks=1 00:08:14.606 00:08:14.606 ' 00:08:14.606 14:14:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:14.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.606 --rc genhtml_branch_coverage=1 00:08:14.606 --rc genhtml_function_coverage=1 00:08:14.606 --rc genhtml_legend=1 00:08:14.606 --rc geninfo_all_blocks=1 00:08:14.606 --rc geninfo_unexecuted_blocks=1 00:08:14.606 00:08:14.606 ' 00:08:14.606 14:14:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:14.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.606 --rc genhtml_branch_coverage=1 00:08:14.607 --rc genhtml_function_coverage=1 00:08:14.607 --rc genhtml_legend=1 00:08:14.607 --rc geninfo_all_blocks=1 00:08:14.607 --rc geninfo_unexecuted_blocks=1 00:08:14.607 00:08:14.607 ' 00:08:14.607 14:14:20 -- nvmf/nvmf.sh@10 -- # uname -s 00:08:14.607 14:14:20 -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:14.607 14:14:20 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:14.607 14:14:20 -- nvmf/common.sh@7 -- # uname -s 00:08:14.607 14:14:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:14.607 14:14:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:14.607 14:14:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:14.607 14:14:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:14.607 14:14:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:14.607 14:14:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:14.607 14:14:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:14.607 14:14:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:14.607 14:14:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:14.607 14:14:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:14.607 14:14:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:08:14.607 14:14:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:08:14.607 14:14:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:14.607 14:14:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:14.607 14:14:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:14.607 14:14:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:14.607 14:14:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.607 14:14:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.607 14:14:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.607 14:14:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.607 14:14:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.607 14:14:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.607 14:14:20 -- paths/export.sh@5 -- # export PATH 00:08:14.607 14:14:20 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.607 14:14:20 -- nvmf/common.sh@46 -- # : 0 00:08:14.607 14:14:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:14.607 14:14:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:14.607 14:14:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:14.607 14:14:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:14.607 14:14:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:14.607 14:14:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:14.607 14:14:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:14.607 14:14:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:14.607 14:14:20 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:14.607 14:14:20 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:14.607 14:14:20 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:14.607 14:14:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:14.607 14:14:20 -- common/autotest_common.sh@10 -- # set +x 00:08:14.607 14:14:20 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:08:14.607 14:14:20 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:14.607 14:14:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:14.607 14:14:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:14.607 14:14:20 -- common/autotest_common.sh@10 -- # set +x 00:08:14.607 ************************************ 00:08:14.607 START TEST nvmf_example 00:08:14.607 ************************************ 00:08:14.607 14:14:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:14.866 * Looking for test storage... 00:08:14.866 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:14.866 14:14:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:14.866 14:14:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:14.866 14:14:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:14.867 14:14:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:14.867 14:14:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:14.867 14:14:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:14.867 14:14:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:14.867 14:14:20 -- scripts/common.sh@335 -- # IFS=.-: 00:08:14.867 14:14:20 -- scripts/common.sh@335 -- # read -ra ver1 00:08:14.867 14:14:20 -- scripts/common.sh@336 -- # IFS=.-: 00:08:14.867 14:14:20 -- scripts/common.sh@336 -- # read -ra ver2 00:08:14.867 14:14:20 -- scripts/common.sh@337 -- # local 'op=<' 00:08:14.867 14:14:20 -- scripts/common.sh@339 -- # ver1_l=2 00:08:14.867 14:14:20 -- scripts/common.sh@340 -- # ver2_l=1 00:08:14.867 14:14:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:14.867 14:14:20 -- scripts/common.sh@343 -- # case "$op" in 00:08:14.867 14:14:20 -- scripts/common.sh@344 -- # : 1 00:08:14.867 14:14:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:14.867 14:14:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:14.867 14:14:20 -- scripts/common.sh@364 -- # decimal 1 00:08:14.867 14:14:20 -- scripts/common.sh@352 -- # local d=1 00:08:14.867 14:14:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:14.867 14:14:20 -- scripts/common.sh@354 -- # echo 1 00:08:14.867 14:14:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:14.867 14:14:20 -- scripts/common.sh@365 -- # decimal 2 00:08:14.867 14:14:20 -- scripts/common.sh@352 -- # local d=2 00:08:14.867 14:14:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:14.867 14:14:20 -- scripts/common.sh@354 -- # echo 2 00:08:14.867 14:14:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:14.867 14:14:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:14.867 14:14:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:14.867 14:14:20 -- scripts/common.sh@367 -- # return 0 00:08:14.867 14:14:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:14.867 14:14:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:14.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.867 --rc genhtml_branch_coverage=1 00:08:14.867 --rc genhtml_function_coverage=1 00:08:14.867 --rc genhtml_legend=1 00:08:14.867 --rc geninfo_all_blocks=1 00:08:14.867 --rc geninfo_unexecuted_blocks=1 00:08:14.867 00:08:14.867 ' 00:08:14.867 14:14:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:14.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.867 --rc genhtml_branch_coverage=1 00:08:14.867 --rc genhtml_function_coverage=1 00:08:14.867 --rc genhtml_legend=1 00:08:14.867 --rc geninfo_all_blocks=1 00:08:14.867 --rc geninfo_unexecuted_blocks=1 00:08:14.867 00:08:14.867 ' 00:08:14.867 14:14:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:14.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.867 --rc genhtml_branch_coverage=1 00:08:14.867 --rc genhtml_function_coverage=1 00:08:14.867 --rc genhtml_legend=1 00:08:14.867 --rc geninfo_all_blocks=1 00:08:14.867 --rc geninfo_unexecuted_blocks=1 00:08:14.867 00:08:14.867 ' 00:08:14.867 14:14:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:14.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.867 --rc genhtml_branch_coverage=1 00:08:14.867 --rc genhtml_function_coverage=1 00:08:14.867 --rc genhtml_legend=1 00:08:14.867 --rc geninfo_all_blocks=1 00:08:14.867 --rc geninfo_unexecuted_blocks=1 00:08:14.867 00:08:14.867 ' 00:08:14.867 14:14:20 -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:14.867 14:14:20 -- nvmf/common.sh@7 -- # uname -s 00:08:14.867 14:14:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:14.867 14:14:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:14.867 14:14:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:14.867 14:14:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:14.867 14:14:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:14.867 14:14:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:14.867 14:14:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:14.867 14:14:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:14.867 14:14:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:14.867 14:14:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:14.867 14:14:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 
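nvmf/common.sh above generates a fresh host NQN with nvme gen-hostnqn and stores the matching connect arguments in NVME_HOST; this particular run drives the target with spdk_nvme_perf rather than the kernel initiator, so the connect itself never happens. Purely as an illustration of how those variables would compose (the flags are standard nvme-cli, and the subsystem NQN is the cnode1 subsystem created later in the test, not something issued in this log):

# Hypothetical use of the NVME_HOST* variables defined in nvmf/common.sh.
NVME_HOSTNQN=$(nvme gen-hostnqn)
NVME_HOSTID=${NVME_HOSTNQN##*:}   # uuid portion, matching the values in the trace
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"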
00:08:14.867 14:14:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:08:14.867 14:14:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:14.867 14:14:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:14.867 14:14:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:14.867 14:14:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:14.867 14:14:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.867 14:14:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.867 14:14:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.867 14:14:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.867 14:14:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.867 14:14:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.867 14:14:20 -- paths/export.sh@5 -- # export PATH 00:08:14.867 14:14:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.867 14:14:20 -- nvmf/common.sh@46 -- # : 0 00:08:14.867 14:14:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:14.867 14:14:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:14.867 14:14:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:14.867 14:14:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:14.867 14:14:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:14.867 14:14:20 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:08:14.867 14:14:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:14.867 14:14:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:14.867 14:14:20 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:14.867 14:14:20 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:14.867 14:14:20 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:14.867 14:14:20 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:14.867 14:14:20 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:14.867 14:14:20 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:14.867 14:14:20 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:14.867 14:14:20 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:14.867 14:14:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:14.867 14:14:20 -- common/autotest_common.sh@10 -- # set +x 00:08:14.867 14:14:20 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:14.867 14:14:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:14.867 14:14:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:14.867 14:14:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:14.867 14:14:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:14.867 14:14:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:14.867 14:14:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.867 14:14:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:14.867 14:14:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.867 14:14:20 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:14.867 14:14:20 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:14.867 14:14:20 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:14.867 14:14:20 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:14.867 14:14:20 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:14.867 14:14:20 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:14.867 14:14:20 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:14.867 14:14:20 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:14.867 14:14:20 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:14.867 14:14:20 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:14.867 14:14:20 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:14.867 14:14:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:14.867 14:14:20 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:14.867 14:14:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:14.867 14:14:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:14.867 14:14:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:14.867 14:14:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:14.867 14:14:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:14.867 14:14:20 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:14.867 Cannot find device "nvmf_init_br" 00:08:14.867 14:14:20 -- nvmf/common.sh@153 -- # true 00:08:14.867 14:14:20 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:14.867 Cannot find device "nvmf_tgt_br" 00:08:14.867 14:14:20 -- nvmf/common.sh@154 -- # true 00:08:14.867 14:14:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:14.868 Cannot find device "nvmf_tgt_br2" 
00:08:14.868 14:14:20 -- nvmf/common.sh@155 -- # true 00:08:14.868 14:14:20 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:14.868 Cannot find device "nvmf_init_br" 00:08:14.868 14:14:20 -- nvmf/common.sh@156 -- # true 00:08:14.868 14:14:20 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:14.868 Cannot find device "nvmf_tgt_br" 00:08:14.868 14:14:20 -- nvmf/common.sh@157 -- # true 00:08:14.868 14:14:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:15.126 Cannot find device "nvmf_tgt_br2" 00:08:15.126 14:14:20 -- nvmf/common.sh@158 -- # true 00:08:15.126 14:14:20 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:15.126 Cannot find device "nvmf_br" 00:08:15.126 14:14:20 -- nvmf/common.sh@159 -- # true 00:08:15.126 14:14:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:15.126 Cannot find device "nvmf_init_if" 00:08:15.126 14:14:20 -- nvmf/common.sh@160 -- # true 00:08:15.126 14:14:20 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:15.126 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:15.126 14:14:20 -- nvmf/common.sh@161 -- # true 00:08:15.126 14:14:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:15.126 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:15.126 14:14:20 -- nvmf/common.sh@162 -- # true 00:08:15.126 14:14:20 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:15.126 14:14:20 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:15.126 14:14:20 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:15.126 14:14:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:15.126 14:14:20 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:15.126 14:14:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:15.126 14:14:20 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:15.126 14:14:20 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:15.126 14:14:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:15.126 14:14:20 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:15.126 14:14:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:15.126 14:14:20 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:15.126 14:14:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:15.126 14:14:20 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:15.126 14:14:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:15.126 14:14:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:15.126 14:14:20 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:15.126 14:14:20 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:15.126 14:14:20 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:15.127 14:14:20 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:15.385 14:14:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:15.385 14:14:20 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:15.385 14:14:20 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:15.385 14:14:20 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:15.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:15.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:08:15.385 00:08:15.385 --- 10.0.0.2 ping statistics --- 00:08:15.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.385 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:08:15.385 14:14:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:15.385 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:15.385 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:08:15.385 00:08:15.385 --- 10.0.0.3 ping statistics --- 00:08:15.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.385 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:08:15.385 14:14:20 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:15.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:15.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:08:15.385 00:08:15.385 --- 10.0.0.1 ping statistics --- 00:08:15.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.385 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:08:15.385 14:14:20 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:15.385 14:14:20 -- nvmf/common.sh@421 -- # return 0 00:08:15.385 14:14:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:15.385 14:14:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:15.385 14:14:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:15.385 14:14:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:15.386 14:14:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:15.386 14:14:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:15.386 14:14:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:15.386 14:14:20 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:15.386 14:14:20 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:15.386 14:14:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:15.386 14:14:20 -- common/autotest_common.sh@10 -- # set +x 00:08:15.386 14:14:20 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:08:15.386 14:14:20 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:08:15.386 14:14:20 -- target/nvmf_example.sh@34 -- # nvmfpid=72074 00:08:15.386 14:14:20 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:15.386 14:14:20 -- target/nvmf_example.sh@36 -- # waitforlisten 72074 00:08:15.386 14:14:20 -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:15.386 14:14:20 -- common/autotest_common.sh@829 -- # '[' -z 72074 ']' 00:08:15.386 14:14:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.386 14:14:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:15.386 14:14:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
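The nvmf_veth_init sequence above builds the whole NVMe/TCP test topology: the initiator keeps 10.0.0.1 in the root namespace, the target's 10.0.0.2 and 10.0.0.3 live inside nvmf_tgt_ns_spdk, and the nvmf_br bridge joins the two sides, with an iptables rule admitting port 4420 and pings confirming reachability. Stripped of the cleanup and the second target interface, the same topology reduces to roughly:

# One-interface reduction of the veth/bridge topology traced above.
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Admit the NVMe/TCP port and verify the path, as the trace does.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2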
00:08:15.386 14:14:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:15.386 14:14:20 -- common/autotest_common.sh@10 -- # set +x 00:08:16.760 14:14:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:16.760 14:14:21 -- common/autotest_common.sh@862 -- # return 0 00:08:16.760 14:14:21 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:16.760 14:14:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:16.760 14:14:21 -- common/autotest_common.sh@10 -- # set +x 00:08:16.760 14:14:22 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:16.760 14:14:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.760 14:14:22 -- common/autotest_common.sh@10 -- # set +x 00:08:16.760 14:14:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.760 14:14:22 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:16.761 14:14:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.761 14:14:22 -- common/autotest_common.sh@10 -- # set +x 00:08:16.761 14:14:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.761 14:14:22 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:16.761 14:14:22 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:16.761 14:14:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.761 14:14:22 -- common/autotest_common.sh@10 -- # set +x 00:08:16.761 14:14:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.761 14:14:22 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:16.761 14:14:22 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:16.761 14:14:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.761 14:14:22 -- common/autotest_common.sh@10 -- # set +x 00:08:16.761 14:14:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.761 14:14:22 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:16.761 14:14:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.761 14:14:22 -- common/autotest_common.sh@10 -- # set +x 00:08:16.761 14:14:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.761 14:14:22 -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:08:16.761 14:14:22 -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:26.737 Initializing NVMe Controllers 00:08:26.737 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:26.737 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:26.737 Initialization complete. Launching workers. 
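Once the topology is up, the example test provisions the target entirely over RPC and then aims spdk_nvme_perf at the new listener from the root namespace, across the bridge built earlier. Gathered into one place (same RPC methods, arguments and perf options as in the trace; shown here against rpc.py and the default RPC socket rather than the test's rpc_cmd wrapper):

# Provision the TCP subsystem and run the 10-second randrw workload from the trace.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" bdev_malloc_create 64 512                        # 64 MiB, 512-byte blocks -> Malloc0
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# 64-deep 4 KiB random read/write, 30% writes, 10 seconds, over NVMe/TCP.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'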
00:08:26.737 ======================================================== 00:08:26.737 Latency(us) 00:08:26.737 Device Information : IOPS MiB/s Average min max 00:08:26.737 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16670.70 65.12 3840.25 557.36 22956.78 00:08:26.737 ======================================================== 00:08:26.737 Total : 16670.70 65.12 3840.25 557.36 22956.78 00:08:26.737 00:08:26.737 14:14:32 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:26.737 14:14:32 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:26.737 14:14:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:26.737 14:14:32 -- nvmf/common.sh@116 -- # sync 00:08:26.737 14:14:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:26.737 14:14:32 -- nvmf/common.sh@119 -- # set +e 00:08:26.737 14:14:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:26.737 14:14:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:26.997 rmmod nvme_tcp 00:08:26.997 rmmod nvme_fabrics 00:08:26.997 rmmod nvme_keyring 00:08:26.997 14:14:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:26.997 14:14:32 -- nvmf/common.sh@123 -- # set -e 00:08:26.997 14:14:32 -- nvmf/common.sh@124 -- # return 0 00:08:26.997 14:14:32 -- nvmf/common.sh@477 -- # '[' -n 72074 ']' 00:08:26.997 14:14:32 -- nvmf/common.sh@478 -- # killprocess 72074 00:08:26.997 14:14:32 -- common/autotest_common.sh@936 -- # '[' -z 72074 ']' 00:08:26.997 14:14:32 -- common/autotest_common.sh@940 -- # kill -0 72074 00:08:26.997 14:14:32 -- common/autotest_common.sh@941 -- # uname 00:08:26.997 14:14:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:26.997 14:14:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72074 00:08:26.997 14:14:32 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:08:26.997 14:14:32 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:08:26.997 killing process with pid 72074 00:08:26.997 14:14:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72074' 00:08:26.997 14:14:32 -- common/autotest_common.sh@955 -- # kill 72074 00:08:26.997 14:14:32 -- common/autotest_common.sh@960 -- # wait 72074 00:08:27.256 nvmf threads initialize successfully 00:08:27.256 bdev subsystem init successfully 00:08:27.256 created a nvmf target service 00:08:27.256 create targets's poll groups done 00:08:27.256 all subsystems of target started 00:08:27.256 nvmf target is running 00:08:27.256 all subsystems of target stopped 00:08:27.256 destroy targets's poll groups done 00:08:27.256 destroyed the nvmf target service 00:08:27.256 bdev subsystem finish successfully 00:08:27.256 nvmf threads destroy successfully 00:08:27.256 14:14:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:27.256 14:14:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:27.256 14:14:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:27.256 14:14:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:27.256 14:14:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:27.256 14:14:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.256 14:14:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:27.256 14:14:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.256 14:14:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:27.256 14:14:32 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:27.256 14:14:32 -- common/autotest_common.sh@728 -- # 
xtrace_disable 00:08:27.256 14:14:32 -- common/autotest_common.sh@10 -- # set +x 00:08:27.256 00:08:27.256 real 0m12.515s 00:08:27.256 user 0m44.820s 00:08:27.256 sys 0m1.988s 00:08:27.256 14:14:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:27.256 ************************************ 00:08:27.256 END TEST nvmf_example 00:08:27.256 14:14:32 -- common/autotest_common.sh@10 -- # set +x 00:08:27.256 ************************************ 00:08:27.256 14:14:32 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:27.256 14:14:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:27.256 14:14:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:27.256 14:14:32 -- common/autotest_common.sh@10 -- # set +x 00:08:27.256 ************************************ 00:08:27.256 START TEST nvmf_filesystem 00:08:27.256 ************************************ 00:08:27.256 14:14:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:27.256 * Looking for test storage... 00:08:27.256 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:27.256 14:14:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:27.256 14:14:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:27.256 14:14:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:27.517 14:14:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:27.517 14:14:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:27.517 14:14:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:27.517 14:14:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:27.517 14:14:32 -- scripts/common.sh@335 -- # IFS=.-: 00:08:27.517 14:14:32 -- scripts/common.sh@335 -- # read -ra ver1 00:08:27.517 14:14:32 -- scripts/common.sh@336 -- # IFS=.-: 00:08:27.517 14:14:32 -- scripts/common.sh@336 -- # read -ra ver2 00:08:27.517 14:14:32 -- scripts/common.sh@337 -- # local 'op=<' 00:08:27.517 14:14:32 -- scripts/common.sh@339 -- # ver1_l=2 00:08:27.517 14:14:32 -- scripts/common.sh@340 -- # ver2_l=1 00:08:27.517 14:14:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:27.517 14:14:32 -- scripts/common.sh@343 -- # case "$op" in 00:08:27.517 14:14:32 -- scripts/common.sh@344 -- # : 1 00:08:27.517 14:14:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:27.517 14:14:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:27.517 14:14:32 -- scripts/common.sh@364 -- # decimal 1 00:08:27.517 14:14:32 -- scripts/common.sh@352 -- # local d=1 00:08:27.517 14:14:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.517 14:14:32 -- scripts/common.sh@354 -- # echo 1 00:08:27.517 14:14:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:27.517 14:14:32 -- scripts/common.sh@365 -- # decimal 2 00:08:27.517 14:14:32 -- scripts/common.sh@352 -- # local d=2 00:08:27.517 14:14:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.517 14:14:32 -- scripts/common.sh@354 -- # echo 2 00:08:27.518 14:14:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:27.518 14:14:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:27.518 14:14:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:27.518 14:14:32 -- scripts/common.sh@367 -- # return 0 00:08:27.518 14:14:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:27.518 14:14:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:27.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.518 --rc genhtml_branch_coverage=1 00:08:27.518 --rc genhtml_function_coverage=1 00:08:27.518 --rc genhtml_legend=1 00:08:27.518 --rc geninfo_all_blocks=1 00:08:27.518 --rc geninfo_unexecuted_blocks=1 00:08:27.518 00:08:27.518 ' 00:08:27.518 14:14:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:27.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.518 --rc genhtml_branch_coverage=1 00:08:27.518 --rc genhtml_function_coverage=1 00:08:27.518 --rc genhtml_legend=1 00:08:27.518 --rc geninfo_all_blocks=1 00:08:27.518 --rc geninfo_unexecuted_blocks=1 00:08:27.518 00:08:27.518 ' 00:08:27.518 14:14:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:27.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.518 --rc genhtml_branch_coverage=1 00:08:27.518 --rc genhtml_function_coverage=1 00:08:27.518 --rc genhtml_legend=1 00:08:27.518 --rc geninfo_all_blocks=1 00:08:27.518 --rc geninfo_unexecuted_blocks=1 00:08:27.518 00:08:27.518 ' 00:08:27.518 14:14:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:27.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.518 --rc genhtml_branch_coverage=1 00:08:27.518 --rc genhtml_function_coverage=1 00:08:27.518 --rc genhtml_legend=1 00:08:27.518 --rc geninfo_all_blocks=1 00:08:27.518 --rc geninfo_unexecuted_blocks=1 00:08:27.518 00:08:27.518 ' 00:08:27.518 14:14:32 -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:08:27.518 14:14:32 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:27.518 14:14:32 -- common/autotest_common.sh@34 -- # set -e 00:08:27.518 14:14:32 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:27.518 14:14:32 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:27.518 14:14:32 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:08:27.518 14:14:32 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:08:27.518 14:14:32 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:27.518 14:14:32 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:27.518 14:14:32 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:27.518 14:14:32 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:27.518 14:14:32 -- common/build_config.sh@5 -- # 
CONFIG_USDT=y 00:08:27.518 14:14:32 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:27.518 14:14:32 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:27.518 14:14:32 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:27.518 14:14:32 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:27.518 14:14:32 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:27.518 14:14:32 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:27.518 14:14:32 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:27.518 14:14:32 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:27.518 14:14:32 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:27.518 14:14:32 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:27.518 14:14:32 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:27.518 14:14:32 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:27.518 14:14:32 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:27.518 14:14:32 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:27.518 14:14:32 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:27.518 14:14:32 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:27.518 14:14:32 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:27.518 14:14:32 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:27.518 14:14:32 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:27.518 14:14:32 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:27.518 14:14:32 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:27.518 14:14:32 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:27.518 14:14:32 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:27.518 14:14:32 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:27.518 14:14:32 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:27.518 14:14:32 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:27.518 14:14:32 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:27.518 14:14:32 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:27.518 14:14:32 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:27.518 14:14:32 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:27.518 14:14:32 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:08:27.518 14:14:32 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:27.518 14:14:32 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:27.518 14:14:32 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:27.518 14:14:32 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:27.518 14:14:32 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:08:27.518 14:14:32 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:27.518 14:14:32 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:27.518 14:14:32 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:27.518 14:14:32 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:27.518 14:14:32 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:27.518 14:14:32 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:27.518 14:14:32 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:27.518 14:14:32 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:27.518 14:14:32 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:27.518 14:14:32 -- common/build_config.sh@51 
-- # CONFIG_VFIO_USER=n 00:08:27.518 14:14:32 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:27.518 14:14:32 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:08:27.518 14:14:32 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:08:27.518 14:14:32 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:08:27.518 14:14:32 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:08:27.518 14:14:32 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:08:27.518 14:14:32 -- common/build_config.sh@58 -- # CONFIG_GOLANG=y 00:08:27.518 14:14:32 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:08:27.518 14:14:32 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:08:27.518 14:14:32 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:27.518 14:14:32 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:08:27.518 14:14:32 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:08:27.518 14:14:32 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:08:27.518 14:14:32 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:08:27.518 14:14:32 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:27.518 14:14:32 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:08:27.518 14:14:32 -- common/build_config.sh@68 -- # CONFIG_AVAHI=y 00:08:27.518 14:14:32 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:08:27.518 14:14:32 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:08:27.518 14:14:32 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:08:27.518 14:14:32 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:08:27.518 14:14:32 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:08:27.518 14:14:32 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:08:27.518 14:14:32 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:08:27.518 14:14:32 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:08:27.518 14:14:32 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:27.518 14:14:32 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:08:27.518 14:14:32 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:08:27.518 14:14:32 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:08:27.518 14:14:32 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:08:27.518 14:14:33 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:08:27.518 14:14:33 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:08:27.518 14:14:33 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:08:27.518 14:14:33 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:08:27.518 14:14:33 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:08:27.518 14:14:33 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:08:27.518 14:14:33 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:27.518 14:14:33 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:27.518 14:14:33 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:27.518 14:14:33 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:27.518 14:14:33 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:27.518 14:14:33 -- common/applications.sh@19 -- # 
SPDK_APP=("$_app_dir/spdk_tgt") 00:08:27.518 14:14:33 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:08:27.518 14:14:33 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:27.518 #define SPDK_CONFIG_H 00:08:27.518 #define SPDK_CONFIG_APPS 1 00:08:27.518 #define SPDK_CONFIG_ARCH native 00:08:27.518 #undef SPDK_CONFIG_ASAN 00:08:27.518 #define SPDK_CONFIG_AVAHI 1 00:08:27.518 #undef SPDK_CONFIG_CET 00:08:27.518 #define SPDK_CONFIG_COVERAGE 1 00:08:27.518 #define SPDK_CONFIG_CROSS_PREFIX 00:08:27.518 #undef SPDK_CONFIG_CRYPTO 00:08:27.518 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:27.518 #undef SPDK_CONFIG_CUSTOMOCF 00:08:27.518 #undef SPDK_CONFIG_DAOS 00:08:27.518 #define SPDK_CONFIG_DAOS_DIR 00:08:27.518 #define SPDK_CONFIG_DEBUG 1 00:08:27.518 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:27.518 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:08:27.518 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:08:27.518 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:08:27.518 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:27.518 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:27.518 #define SPDK_CONFIG_EXAMPLES 1 00:08:27.518 #undef SPDK_CONFIG_FC 00:08:27.518 #define SPDK_CONFIG_FC_PATH 00:08:27.518 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:27.518 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:27.518 #undef SPDK_CONFIG_FUSE 00:08:27.518 #undef SPDK_CONFIG_FUZZER 00:08:27.518 #define SPDK_CONFIG_FUZZER_LIB 00:08:27.518 #define SPDK_CONFIG_GOLANG 1 00:08:27.518 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:27.518 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:27.518 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:27.518 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:27.518 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:27.518 #define SPDK_CONFIG_IDXD 1 00:08:27.518 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:27.518 #undef SPDK_CONFIG_IPSEC_MB 00:08:27.518 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:27.518 #define SPDK_CONFIG_ISAL 1 00:08:27.518 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:27.518 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:27.518 #define SPDK_CONFIG_LIBDIR 00:08:27.518 #undef SPDK_CONFIG_LTO 00:08:27.518 #define SPDK_CONFIG_MAX_LCORES 00:08:27.518 #define SPDK_CONFIG_NVME_CUSE 1 00:08:27.518 #undef SPDK_CONFIG_OCF 00:08:27.518 #define SPDK_CONFIG_OCF_PATH 00:08:27.518 #define SPDK_CONFIG_OPENSSL_PATH 00:08:27.518 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:27.518 #undef SPDK_CONFIG_PGO_USE 00:08:27.518 #define SPDK_CONFIG_PREFIX /usr/local 00:08:27.519 #undef SPDK_CONFIG_RAID5F 00:08:27.519 #undef SPDK_CONFIG_RBD 00:08:27.519 #define SPDK_CONFIG_RDMA 1 00:08:27.519 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:27.519 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:27.519 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:27.519 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:27.519 #define SPDK_CONFIG_SHARED 1 00:08:27.519 #undef SPDK_CONFIG_SMA 00:08:27.519 #define SPDK_CONFIG_TESTS 1 00:08:27.519 #undef SPDK_CONFIG_TSAN 00:08:27.519 #define SPDK_CONFIG_UBLK 1 00:08:27.519 #define SPDK_CONFIG_UBSAN 1 00:08:27.519 #undef SPDK_CONFIG_UNIT_TESTS 00:08:27.519 #undef SPDK_CONFIG_URING 00:08:27.519 #define SPDK_CONFIG_URING_PATH 00:08:27.519 #undef SPDK_CONFIG_URING_ZNS 00:08:27.519 #define SPDK_CONFIG_USDT 1 00:08:27.519 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:27.519 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:27.519 #undef SPDK_CONFIG_VFIO_USER 00:08:27.519 #define 
SPDK_CONFIG_VFIO_USER_DIR 00:08:27.519 #define SPDK_CONFIG_VHOST 1 00:08:27.519 #define SPDK_CONFIG_VIRTIO 1 00:08:27.519 #undef SPDK_CONFIG_VTUNE 00:08:27.519 #define SPDK_CONFIG_VTUNE_DIR 00:08:27.519 #define SPDK_CONFIG_WERROR 1 00:08:27.519 #define SPDK_CONFIG_WPDK_DIR 00:08:27.519 #undef SPDK_CONFIG_XNVME 00:08:27.519 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:27.519 14:14:33 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:27.519 14:14:33 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:27.519 14:14:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:27.519 14:14:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:27.519 14:14:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:27.519 14:14:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.519 14:14:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.519 14:14:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.519 14:14:33 -- paths/export.sh@5 -- # export PATH 00:08:27.519 14:14:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.519 14:14:33 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:08:27.519 14:14:33 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:08:27.519 14:14:33 -- pm/common@6 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:08:27.519 14:14:33 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:08:27.519 14:14:33 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:08:27.519 14:14:33 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:08:27.519 14:14:33 -- pm/common@16 -- # TEST_TAG=N/A 00:08:27.519 14:14:33 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:08:27.519 14:14:33 -- common/autotest_common.sh@52 -- # : 1 00:08:27.519 14:14:33 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:08:27.519 14:14:33 -- common/autotest_common.sh@56 -- # : 0 00:08:27.519 14:14:33 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:27.519 14:14:33 -- common/autotest_common.sh@58 -- # : 0 00:08:27.519 14:14:33 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:08:27.519 14:14:33 -- common/autotest_common.sh@60 -- # : 1 00:08:27.519 14:14:33 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:27.519 14:14:33 -- common/autotest_common.sh@62 -- # : 0 00:08:27.519 14:14:33 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:08:27.519 14:14:33 -- common/autotest_common.sh@64 -- # : 00:08:27.519 14:14:33 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:08:27.519 14:14:33 -- common/autotest_common.sh@66 -- # : 0 00:08:27.519 14:14:33 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:08:27.519 14:14:33 -- common/autotest_common.sh@68 -- # : 0 00:08:27.519 14:14:33 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:08:27.519 14:14:33 -- common/autotest_common.sh@70 -- # : 0 00:08:27.519 14:14:33 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:08:27.519 14:14:33 -- common/autotest_common.sh@72 -- # : 0 00:08:27.519 14:14:33 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:27.519 14:14:33 -- common/autotest_common.sh@74 -- # : 0 00:08:27.519 14:14:33 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:08:27.519 14:14:33 -- common/autotest_common.sh@76 -- # : 0 00:08:27.519 14:14:33 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:08:27.519 14:14:33 -- common/autotest_common.sh@78 -- # : 0 00:08:27.519 14:14:33 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:08:27.519 14:14:33 -- common/autotest_common.sh@80 -- # : 0 00:08:27.519 14:14:33 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:08:27.519 14:14:33 -- common/autotest_common.sh@82 -- # : 0 00:08:27.519 14:14:33 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:08:27.519 14:14:33 -- common/autotest_common.sh@84 -- # : 0 00:08:27.519 14:14:33 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:08:27.519 14:14:33 -- common/autotest_common.sh@86 -- # : 1 00:08:27.519 14:14:33 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:08:27.519 14:14:33 -- common/autotest_common.sh@88 -- # : 0 00:08:27.519 14:14:33 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:08:27.519 14:14:33 -- common/autotest_common.sh@90 -- # : 0 00:08:27.519 14:14:33 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:27.519 14:14:33 -- common/autotest_common.sh@92 -- # : 0 00:08:27.519 14:14:33 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:08:27.519 14:14:33 -- common/autotest_common.sh@94 -- # : 0 00:08:27.519 14:14:33 -- 
common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:08:27.519 14:14:33 -- common/autotest_common.sh@96 -- # : tcp 00:08:27.519 14:14:33 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:27.519 14:14:33 -- common/autotest_common.sh@98 -- # : 0 00:08:27.519 14:14:33 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:08:27.519 14:14:33 -- common/autotest_common.sh@100 -- # : 0 00:08:27.519 14:14:33 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:08:27.519 14:14:33 -- common/autotest_common.sh@102 -- # : 0 00:08:27.519 14:14:33 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:08:27.519 14:14:33 -- common/autotest_common.sh@104 -- # : 0 00:08:27.519 14:14:33 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:08:27.519 14:14:33 -- common/autotest_common.sh@106 -- # : 0 00:08:27.519 14:14:33 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:08:27.519 14:14:33 -- common/autotest_common.sh@108 -- # : 0 00:08:27.519 14:14:33 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:08:27.519 14:14:33 -- common/autotest_common.sh@110 -- # : 0 00:08:27.519 14:14:33 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:08:27.519 14:14:33 -- common/autotest_common.sh@112 -- # : 0 00:08:27.519 14:14:33 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:27.519 14:14:33 -- common/autotest_common.sh@114 -- # : 0 00:08:27.519 14:14:33 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:08:27.519 14:14:33 -- common/autotest_common.sh@116 -- # : 1 00:08:27.519 14:14:33 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:08:27.519 14:14:33 -- common/autotest_common.sh@118 -- # : /home/vagrant/spdk_repo/dpdk/build 00:08:27.519 14:14:33 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:27.519 14:14:33 -- common/autotest_common.sh@120 -- # : 0 00:08:27.519 14:14:33 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:08:27.519 14:14:33 -- common/autotest_common.sh@122 -- # : 0 00:08:27.519 14:14:33 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:08:27.519 14:14:33 -- common/autotest_common.sh@124 -- # : 0 00:08:27.519 14:14:33 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:08:27.519 14:14:33 -- common/autotest_common.sh@126 -- # : 0 00:08:27.519 14:14:33 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:08:27.519 14:14:33 -- common/autotest_common.sh@128 -- # : 0 00:08:27.519 14:14:33 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:08:27.519 14:14:33 -- common/autotest_common.sh@130 -- # : 0 00:08:27.519 14:14:33 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:08:27.519 14:14:33 -- common/autotest_common.sh@132 -- # : v23.11 00:08:27.519 14:14:33 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:08:27.519 14:14:33 -- common/autotest_common.sh@134 -- # : true 00:08:27.519 14:14:33 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:08:27.519 14:14:33 -- common/autotest_common.sh@136 -- # : 0 00:08:27.519 14:14:33 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:08:27.519 14:14:33 -- common/autotest_common.sh@138 -- # : 0 00:08:27.519 14:14:33 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:08:27.519 14:14:33 -- common/autotest_common.sh@140 -- # : 1 00:08:27.519 14:14:33 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:08:27.519 14:14:33 -- 
common/autotest_common.sh@142 -- # : 0 00:08:27.519 14:14:33 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:08:27.519 14:14:33 -- common/autotest_common.sh@144 -- # : 0 00:08:27.519 14:14:33 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:08:27.519 14:14:33 -- common/autotest_common.sh@146 -- # : 0 00:08:27.519 14:14:33 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:08:27.519 14:14:33 -- common/autotest_common.sh@148 -- # : 00:08:27.519 14:14:33 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:08:27.519 14:14:33 -- common/autotest_common.sh@150 -- # : 0 00:08:27.519 14:14:33 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:08:27.519 14:14:33 -- common/autotest_common.sh@152 -- # : 0 00:08:27.519 14:14:33 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:08:27.519 14:14:33 -- common/autotest_common.sh@154 -- # : 0 00:08:27.519 14:14:33 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:08:27.519 14:14:33 -- common/autotest_common.sh@156 -- # : 0 00:08:27.519 14:14:33 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:08:27.519 14:14:33 -- common/autotest_common.sh@158 -- # : 0 00:08:27.519 14:14:33 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:08:27.519 14:14:33 -- common/autotest_common.sh@160 -- # : 0 00:08:27.519 14:14:33 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:08:27.519 14:14:33 -- common/autotest_common.sh@163 -- # : 00:08:27.519 14:14:33 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:08:27.519 14:14:33 -- common/autotest_common.sh@165 -- # : 1 00:08:27.519 14:14:33 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:08:27.519 14:14:33 -- common/autotest_common.sh@167 -- # : 1 00:08:27.519 14:14:33 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:27.519 14:14:33 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:08:27.519 14:14:33 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:08:27.519 14:14:33 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:27.519 14:14:33 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:27.519 14:14:33 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:27.519 14:14:33 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:27.519 14:14:33 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:27.520 14:14:33 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:27.520 14:14:33 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:27.520 14:14:33 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:27.520 14:14:33 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:27.520 14:14:33 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:27.520 14:14:33 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:27.520 14:14:33 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:08:27.520 14:14:33 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:27.520 14:14:33 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:27.520 14:14:33 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:27.520 14:14:33 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:27.520 14:14:33 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:27.520 14:14:33 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:08:27.520 14:14:33 -- common/autotest_common.sh@196 -- # cat 00:08:27.520 14:14:33 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:08:27.520 14:14:33 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:27.520 14:14:33 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:27.520 14:14:33 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:27.520 14:14:33 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:27.520 14:14:33 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:08:27.520 14:14:33 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:08:27.520 14:14:33 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:08:27.520 14:14:33 -- 
common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:08:27.520 14:14:33 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:08:27.520 14:14:33 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:08:27.520 14:14:33 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:27.520 14:14:33 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:27.520 14:14:33 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:27.520 14:14:33 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:27.520 14:14:33 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:08:27.520 14:14:33 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:08:27.520 14:14:33 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:27.520 14:14:33 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:27.520 14:14:33 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:08:27.520 14:14:33 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:08:27.520 14:14:33 -- common/autotest_common.sh@249 -- # _LCOV= 00:08:27.520 14:14:33 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:08:27.520 14:14:33 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:08:27.520 14:14:33 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:08:27.520 14:14:33 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:08:27.520 14:14:33 -- common/autotest_common.sh@255 -- # lcov_opt= 00:08:27.520 14:14:33 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:08:27.520 14:14:33 -- common/autotest_common.sh@259 -- # export valgrind= 00:08:27.520 14:14:33 -- common/autotest_common.sh@259 -- # valgrind= 00:08:27.520 14:14:33 -- common/autotest_common.sh@265 -- # uname -s 00:08:27.520 14:14:33 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:08:27.520 14:14:33 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:08:27.520 14:14:33 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:08:27.520 14:14:33 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:08:27.520 14:14:33 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:08:27.520 14:14:33 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:08:27.520 14:14:33 -- common/autotest_common.sh@275 -- # MAKE=make 00:08:27.520 14:14:33 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j10 00:08:27.520 14:14:33 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:08:27.520 14:14:33 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:08:27.520 14:14:33 -- common/autotest_common.sh@294 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:08:27.520 14:14:33 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:08:27.520 14:14:33 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:08:27.520 14:14:33 -- common/autotest_common.sh@301 -- # for i in "$@" 00:08:27.520 14:14:33 -- common/autotest_common.sh@302 -- # case "$i" in 00:08:27.520 14:14:33 -- common/autotest_common.sh@307 -- # TEST_TRANSPORT=tcp 00:08:27.520 14:14:33 -- common/autotest_common.sh@319 -- # [[ 
-z 72311 ]] 00:08:27.520 14:14:33 -- common/autotest_common.sh@319 -- # kill -0 72311 00:08:27.520 14:14:33 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:08:27.520 14:14:33 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:08:27.520 14:14:33 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:08:27.520 14:14:33 -- common/autotest_common.sh@332 -- # local mount target_dir 00:08:27.520 14:14:33 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:08:27.520 14:14:33 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:08:27.520 14:14:33 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:08:27.520 14:14:33 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:08:27.520 14:14:33 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.GbDfjE 00:08:27.520 14:14:33 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:27.520 14:14:33 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:08:27.520 14:14:33 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:08:27.520 14:14:33 -- common/autotest_common.sh@356 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.GbDfjE/tests/target /tmp/spdk.GbDfjE 00:08:27.520 14:14:33 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:08:27.520 14:14:33 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:27.520 14:14:33 -- common/autotest_common.sh@328 -- # df -T 00:08:27.520 14:14:33 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:08:27.520 14:14:33 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda5 00:08:27.520 14:14:33 -- common/autotest_common.sh@362 -- # fss["$mount"]=btrfs 00:08:27.520 14:14:33 -- common/autotest_common.sh@363 -- # avails["$mount"]=13293805568 00:08:27.520 14:14:33 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20314062848 00:08:27.520 14:14:33 -- common/autotest_common.sh@364 -- # uses["$mount"]=6289383424 00:08:27.520 14:14:33 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:27.520 14:14:33 -- common/autotest_common.sh@362 -- # mounts["$mount"]=devtmpfs 00:08:27.520 14:14:33 -- common/autotest_common.sh@362 -- # fss["$mount"]=devtmpfs 00:08:27.520 14:14:33 -- common/autotest_common.sh@363 -- # avails["$mount"]=4194304 00:08:27.520 14:14:33 -- common/autotest_common.sh@363 -- # sizes["$mount"]=4194304 00:08:27.520 14:14:33 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:08:27.520 14:14:33 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:27.520 14:14:33 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:27.520 14:14:33 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:27.520 14:14:33 -- common/autotest_common.sh@363 -- # avails["$mount"]=6265163776 00:08:27.520 14:14:33 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6266421248 00:08:27.520 14:14:33 -- common/autotest_common.sh@364 -- # uses["$mount"]=1257472 00:08:27.520 14:14:33 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:27.520 14:14:33 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:27.520 14:14:33 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:27.520 14:14:33 -- common/autotest_common.sh@363 -- # avails["$mount"]=2493755392 00:08:27.520 14:14:33 -- 
common/autotest_common.sh@363 -- # sizes["$mount"]=2506571776 00:08:27.520 14:14:33 -- common/autotest_common.sh@364 -- # uses["$mount"]=12816384 00:08:27.520 14:14:33 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:27.520 14:14:33 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda5 00:08:27.520 14:14:33 -- common/autotest_common.sh@362 -- # fss["$mount"]=btrfs 00:08:27.520 14:14:33 -- common/autotest_common.sh@363 -- # avails["$mount"]=13293805568 00:08:27.520 14:14:33 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20314062848 00:08:27.520 14:14:33 -- common/autotest_common.sh@364 -- # uses["$mount"]=6289383424 00:08:27.520 14:14:33 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:27.520 14:14:33 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:27.520 14:14:33 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:27.520 14:14:33 -- common/autotest_common.sh@363 -- # avails["$mount"]=6266286080 00:08:27.520 14:14:33 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6266425344 00:08:27.520 14:14:33 -- common/autotest_common.sh@364 -- # uses["$mount"]=139264 00:08:27.520 14:14:33 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:27.520 14:14:33 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda2 00:08:27.520 14:14:33 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext4 00:08:27.520 14:14:33 -- common/autotest_common.sh@363 -- # avails["$mount"]=840085504 00:08:27.520 14:14:33 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1012768768 00:08:27.520 14:14:33 -- common/autotest_common.sh@364 -- # uses["$mount"]=103477248 00:08:27.520 14:14:33 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:27.520 14:14:33 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda3 00:08:27.520 14:14:33 -- common/autotest_common.sh@362 -- # fss["$mount"]=vfat 00:08:27.520 14:14:33 -- common/autotest_common.sh@363 -- # avails["$mount"]=91617280 00:08:27.520 14:14:33 -- common/autotest_common.sh@363 -- # sizes["$mount"]=104607744 00:08:27.520 14:14:33 -- common/autotest_common.sh@364 -- # uses["$mount"]=12990464 00:08:27.520 14:14:33 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:27.520 14:14:33 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:27.520 14:14:33 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:27.520 14:14:33 -- common/autotest_common.sh@363 -- # avails["$mount"]=1253269504 00:08:27.520 14:14:33 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1253281792 00:08:27.520 14:14:33 -- common/autotest_common.sh@364 -- # uses["$mount"]=12288 00:08:27.520 14:14:33 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:27.520 14:14:33 -- common/autotest_common.sh@362 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output 00:08:27.520 14:14:33 -- common/autotest_common.sh@362 -- # fss["$mount"]=fuse.sshfs 00:08:27.520 14:14:33 -- common/autotest_common.sh@363 -- # avails["$mount"]=98360266752 00:08:27.520 14:14:33 -- common/autotest_common.sh@363 -- # sizes["$mount"]=105088212992 00:08:27.520 14:14:33 -- common/autotest_common.sh@364 -- # uses["$mount"]=1342513152 00:08:27.520 14:14:33 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:27.520 14:14:33 -- common/autotest_common.sh@367 -- # printf '* Looking 
for test storage...\n' 00:08:27.520 * Looking for test storage... 00:08:27.520 14:14:33 -- common/autotest_common.sh@369 -- # local target_space new_size 00:08:27.520 14:14:33 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:08:27.520 14:14:33 -- common/autotest_common.sh@373 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:27.520 14:14:33 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:27.520 14:14:33 -- common/autotest_common.sh@373 -- # mount=/home 00:08:27.520 14:14:33 -- common/autotest_common.sh@375 -- # target_space=13293805568 00:08:27.520 14:14:33 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:08:27.520 14:14:33 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:08:27.520 14:14:33 -- common/autotest_common.sh@381 -- # [[ btrfs == tmpfs ]] 00:08:27.520 14:14:33 -- common/autotest_common.sh@381 -- # [[ btrfs == ramfs ]] 00:08:27.520 14:14:33 -- common/autotest_common.sh@381 -- # [[ /home == / ]] 00:08:27.520 14:14:33 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:27.520 14:14:33 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:27.520 14:14:33 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:27.520 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:27.520 14:14:33 -- common/autotest_common.sh@390 -- # return 0 00:08:27.520 14:14:33 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:08:27.521 14:14:33 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:08:27.521 14:14:33 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:27.521 14:14:33 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:27.521 14:14:33 -- common/autotest_common.sh@1682 -- # true 00:08:27.521 14:14:33 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:08:27.521 14:14:33 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:27.521 14:14:33 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:27.521 14:14:33 -- common/autotest_common.sh@27 -- # exec 00:08:27.521 14:14:33 -- common/autotest_common.sh@29 -- # exec 00:08:27.521 14:14:33 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:27.521 14:14:33 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:27.521 14:14:33 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:27.521 14:14:33 -- common/autotest_common.sh@18 -- # set -x 00:08:27.521 14:14:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:27.521 14:14:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:27.521 14:14:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:27.781 14:14:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:27.781 14:14:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:27.781 14:14:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:27.781 14:14:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:27.781 14:14:33 -- scripts/common.sh@335 -- # IFS=.-: 00:08:27.781 14:14:33 -- scripts/common.sh@335 -- # read -ra ver1 00:08:27.781 14:14:33 -- scripts/common.sh@336 -- # IFS=.-: 00:08:27.781 14:14:33 -- scripts/common.sh@336 -- # read -ra ver2 00:08:27.781 14:14:33 -- scripts/common.sh@337 -- # local 'op=<' 00:08:27.781 14:14:33 -- scripts/common.sh@339 -- # ver1_l=2 00:08:27.781 14:14:33 -- scripts/common.sh@340 -- # ver2_l=1 00:08:27.781 14:14:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:27.781 14:14:33 -- scripts/common.sh@343 -- # case "$op" in 00:08:27.781 14:14:33 -- scripts/common.sh@344 -- # : 1 00:08:27.781 14:14:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:27.781 14:14:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:27.781 14:14:33 -- scripts/common.sh@364 -- # decimal 1 00:08:27.781 14:14:33 -- scripts/common.sh@352 -- # local d=1 00:08:27.781 14:14:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.781 14:14:33 -- scripts/common.sh@354 -- # echo 1 00:08:27.781 14:14:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:27.781 14:14:33 -- scripts/common.sh@365 -- # decimal 2 00:08:27.781 14:14:33 -- scripts/common.sh@352 -- # local d=2 00:08:27.781 14:14:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.781 14:14:33 -- scripts/common.sh@354 -- # echo 2 00:08:27.781 14:14:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:27.781 14:14:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:27.781 14:14:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:27.781 14:14:33 -- scripts/common.sh@367 -- # return 0 00:08:27.781 14:14:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:27.781 14:14:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:27.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.781 --rc genhtml_branch_coverage=1 00:08:27.781 --rc genhtml_function_coverage=1 00:08:27.781 --rc genhtml_legend=1 00:08:27.781 --rc geninfo_all_blocks=1 00:08:27.781 --rc geninfo_unexecuted_blocks=1 00:08:27.781 00:08:27.781 ' 00:08:27.781 14:14:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:27.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.781 --rc genhtml_branch_coverage=1 00:08:27.781 --rc genhtml_function_coverage=1 00:08:27.781 --rc genhtml_legend=1 00:08:27.781 --rc geninfo_all_blocks=1 00:08:27.781 --rc geninfo_unexecuted_blocks=1 00:08:27.781 00:08:27.781 ' 00:08:27.781 14:14:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:27.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.781 --rc genhtml_branch_coverage=1 00:08:27.781 --rc genhtml_function_coverage=1 00:08:27.781 --rc genhtml_legend=1 00:08:27.781 --rc geninfo_all_blocks=1 00:08:27.781 --rc 
geninfo_unexecuted_blocks=1 00:08:27.781 00:08:27.781 ' 00:08:27.781 14:14:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:27.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.781 --rc genhtml_branch_coverage=1 00:08:27.781 --rc genhtml_function_coverage=1 00:08:27.781 --rc genhtml_legend=1 00:08:27.781 --rc geninfo_all_blocks=1 00:08:27.781 --rc geninfo_unexecuted_blocks=1 00:08:27.781 00:08:27.781 ' 00:08:27.781 14:14:33 -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:27.781 14:14:33 -- nvmf/common.sh@7 -- # uname -s 00:08:27.781 14:14:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:27.781 14:14:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:27.781 14:14:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:27.781 14:14:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:27.781 14:14:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:27.781 14:14:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:27.781 14:14:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:27.781 14:14:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:27.781 14:14:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:27.781 14:14:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:27.781 14:14:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:08:27.781 14:14:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:08:27.781 14:14:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:27.781 14:14:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:27.781 14:14:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:27.781 14:14:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:27.781 14:14:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:27.781 14:14:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:27.781 14:14:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:27.781 14:14:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.781 14:14:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.781 14:14:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.781 14:14:33 -- paths/export.sh@5 -- # export PATH 00:08:27.781 14:14:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.781 14:14:33 -- nvmf/common.sh@46 -- # : 0 00:08:27.781 14:14:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:27.781 14:14:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:27.781 14:14:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:27.781 14:14:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:27.781 14:14:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:27.781 14:14:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:27.782 14:14:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:27.782 14:14:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:27.782 14:14:33 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:27.782 14:14:33 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:27.782 14:14:33 -- target/filesystem.sh@15 -- # nvmftestinit 00:08:27.782 14:14:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:27.782 14:14:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:27.782 14:14:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:27.782 14:14:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:27.782 14:14:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:27.782 14:14:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.782 14:14:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:27.782 14:14:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.782 14:14:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:27.782 14:14:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:27.782 14:14:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:27.782 14:14:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:27.782 14:14:33 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:27.782 14:14:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:27.782 14:14:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:27.782 14:14:33 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:27.782 14:14:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:27.782 14:14:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:27.782 14:14:33 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:27.782 14:14:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:27.782 14:14:33 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:27.782 14:14:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:27.782 14:14:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:27.782 14:14:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:27.782 14:14:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:27.782 14:14:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:27.782 14:14:33 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:27.782 14:14:33 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:27.782 Cannot find device "nvmf_tgt_br" 00:08:27.782 14:14:33 -- nvmf/common.sh@154 -- # true 00:08:27.782 14:14:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:27.782 Cannot find device "nvmf_tgt_br2" 00:08:27.782 14:14:33 -- nvmf/common.sh@155 -- # true 00:08:27.782 14:14:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:27.782 14:14:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:27.782 Cannot find device "nvmf_tgt_br" 00:08:27.782 14:14:33 -- nvmf/common.sh@157 -- # true 00:08:27.782 14:14:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:27.782 Cannot find device "nvmf_tgt_br2" 00:08:27.782 14:14:33 -- nvmf/common.sh@158 -- # true 00:08:27.782 14:14:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:27.782 14:14:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:27.782 14:14:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:27.782 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:27.782 14:14:33 -- nvmf/common.sh@161 -- # true 00:08:27.782 14:14:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:27.782 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:27.782 14:14:33 -- nvmf/common.sh@162 -- # true 00:08:27.782 14:14:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:27.782 14:14:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:27.782 14:14:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:27.782 14:14:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:27.782 14:14:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:28.041 14:14:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:28.041 14:14:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:28.041 14:14:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:28.041 14:14:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:28.041 14:14:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:28.041 14:14:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:28.041 14:14:33 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:28.041 14:14:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:28.041 14:14:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:28.041 14:14:33 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:28.041 14:14:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:28.041 14:14:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:28.041 14:14:33 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:28.041 14:14:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:28.041 14:14:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:28.041 14:14:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:28.041 14:14:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:28.041 14:14:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:28.041 14:14:33 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:28.041 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:28.041 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:08:28.041 00:08:28.041 --- 10.0.0.2 ping statistics --- 00:08:28.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.041 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:08:28.041 14:14:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:28.041 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:28.041 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:08:28.041 00:08:28.041 --- 10.0.0.3 ping statistics --- 00:08:28.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.041 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:08:28.041 14:14:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:28.041 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:28.041 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:08:28.041 00:08:28.041 --- 10.0.0.1 ping statistics --- 00:08:28.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.041 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:08:28.041 14:14:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:28.041 14:14:33 -- nvmf/common.sh@421 -- # return 0 00:08:28.041 14:14:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:28.041 14:14:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:28.041 14:14:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:28.041 14:14:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:28.041 14:14:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:28.041 14:14:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:28.041 14:14:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:28.041 14:14:33 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:28.041 14:14:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:28.041 14:14:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:28.041 14:14:33 -- common/autotest_common.sh@10 -- # set +x 00:08:28.041 ************************************ 00:08:28.041 START TEST nvmf_filesystem_no_in_capsule 00:08:28.041 ************************************ 00:08:28.041 14:14:33 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 0 00:08:28.041 14:14:33 -- target/filesystem.sh@47 -- # in_capsule=0 00:08:28.041 14:14:33 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:28.041 14:14:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:28.041 14:14:33 -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:08:28.041 14:14:33 -- common/autotest_common.sh@10 -- # set +x 00:08:28.041 14:14:33 -- nvmf/common.sh@469 -- # nvmfpid=72491 00:08:28.041 14:14:33 -- nvmf/common.sh@470 -- # waitforlisten 72491 00:08:28.041 14:14:33 -- common/autotest_common.sh@829 -- # '[' -z 72491 ']' 00:08:28.041 14:14:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:28.041 14:14:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.041 14:14:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:28.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.041 14:14:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.041 14:14:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:28.041 14:14:33 -- common/autotest_common.sh@10 -- # set +x 00:08:28.300 [2024-12-05 14:14:33.703971] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:28.300 [2024-12-05 14:14:33.704074] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.300 [2024-12-05 14:14:33.829492] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:28.300 [2024-12-05 14:14:33.892732] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:28.300 [2024-12-05 14:14:33.892933] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:28.300 [2024-12-05 14:14:33.892947] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:28.300 [2024-12-05 14:14:33.892955] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
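nvmfappstart launches nvmf_tgt inside the target namespace and waitforlisten then blocks until the application is alive and its JSON-RPC socket accepts commands. A rough equivalent of that startup wait (the polling interval and retry limit here are illustrative, not the values autotest_common.sh actually uses):

  # start the target in the namespace, exactly as traced above
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll until /var/tmp/spdk.sock exists, while checking the process did not die early
  for _ in $(seq 1 100); do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      [ -S /var/tmp/spdk.sock ] && break
      sleep 0.1
  done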
00:08:28.300 [2024-12-05 14:14:33.893116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.300 [2024-12-05 14:14:33.893287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:28.300 [2024-12-05 14:14:33.893881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:28.300 [2024-12-05 14:14:33.893891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.236 14:14:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:29.236 14:14:34 -- common/autotest_common.sh@862 -- # return 0 00:08:29.236 14:14:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:29.236 14:14:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:29.236 14:14:34 -- common/autotest_common.sh@10 -- # set +x 00:08:29.236 14:14:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:29.236 14:14:34 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:29.236 14:14:34 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:29.236 14:14:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.236 14:14:34 -- common/autotest_common.sh@10 -- # set +x 00:08:29.236 [2024-12-05 14:14:34.811283] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:29.236 14:14:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.236 14:14:34 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:29.236 14:14:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.236 14:14:34 -- common/autotest_common.sh@10 -- # set +x 00:08:29.494 Malloc1 00:08:29.494 14:14:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.494 14:14:34 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:29.494 14:14:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.494 14:14:34 -- common/autotest_common.sh@10 -- # set +x 00:08:29.494 14:14:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.494 14:14:34 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:29.494 14:14:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.494 14:14:34 -- common/autotest_common.sh@10 -- # set +x 00:08:29.494 14:14:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.494 14:14:34 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:29.494 14:14:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.494 14:14:34 -- common/autotest_common.sh@10 -- # set +x 00:08:29.494 [2024-12-05 14:14:34.997748] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:29.494 14:14:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.494 14:14:35 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:29.494 14:14:35 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:08:29.494 14:14:35 -- common/autotest_common.sh@1368 -- # local bdev_info 00:08:29.494 14:14:35 -- common/autotest_common.sh@1369 -- # local bs 00:08:29.494 14:14:35 -- common/autotest_common.sh@1370 -- # local nb 00:08:29.494 14:14:35 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:29.494 14:14:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.494 14:14:35 -- common/autotest_common.sh@10 -- # set +x 00:08:29.494 
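rpc_cmd is the suite's thin wrapper around SPDK's JSON-RPC client, so the provisioning sequence traced here corresponds to plain scripts/rpc.py calls against the default /var/tmp/spdk.sock, roughly:

  # TCP transport, 8 KiB I/O unit, no in-capsule data for this first pass
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  # 512 MiB RAM-backed bdev: 1048576 blocks of 512 bytes
  scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
  # subsystem allowing any host (-a), with the serial number the initiator greps for
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420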
14:14:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.494 14:14:35 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:08:29.494 { 00:08:29.494 "aliases": [ 00:08:29.494 "7ec333fd-2e67-48ee-8bbd-a4c2eadf7f79" 00:08:29.494 ], 00:08:29.494 "assigned_rate_limits": { 00:08:29.494 "r_mbytes_per_sec": 0, 00:08:29.494 "rw_ios_per_sec": 0, 00:08:29.494 "rw_mbytes_per_sec": 0, 00:08:29.494 "w_mbytes_per_sec": 0 00:08:29.494 }, 00:08:29.494 "block_size": 512, 00:08:29.494 "claim_type": "exclusive_write", 00:08:29.494 "claimed": true, 00:08:29.494 "driver_specific": {}, 00:08:29.494 "memory_domains": [ 00:08:29.494 { 00:08:29.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.494 "dma_device_type": 2 00:08:29.494 } 00:08:29.494 ], 00:08:29.494 "name": "Malloc1", 00:08:29.494 "num_blocks": 1048576, 00:08:29.495 "product_name": "Malloc disk", 00:08:29.495 "supported_io_types": { 00:08:29.495 "abort": true, 00:08:29.495 "compare": false, 00:08:29.495 "compare_and_write": false, 00:08:29.495 "flush": true, 00:08:29.495 "nvme_admin": false, 00:08:29.495 "nvme_io": false, 00:08:29.495 "read": true, 00:08:29.495 "reset": true, 00:08:29.495 "unmap": true, 00:08:29.495 "write": true, 00:08:29.495 "write_zeroes": true 00:08:29.495 }, 00:08:29.495 "uuid": "7ec333fd-2e67-48ee-8bbd-a4c2eadf7f79", 00:08:29.495 "zoned": false 00:08:29.495 } 00:08:29.495 ]' 00:08:29.495 14:14:35 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:08:29.495 14:14:35 -- common/autotest_common.sh@1372 -- # bs=512 00:08:29.495 14:14:35 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:08:29.495 14:14:35 -- common/autotest_common.sh@1373 -- # nb=1048576 00:08:29.495 14:14:35 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:08:29.495 14:14:35 -- common/autotest_common.sh@1377 -- # echo 512 00:08:29.495 14:14:35 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:29.495 14:14:35 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:29.752 14:14:35 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:29.752 14:14:35 -- common/autotest_common.sh@1187 -- # local i=0 00:08:29.752 14:14:35 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:08:29.752 14:14:35 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:08:29.752 14:14:35 -- common/autotest_common.sh@1194 -- # sleep 2 00:08:32.299 14:14:37 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:08:32.299 14:14:37 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:08:32.299 14:14:37 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:08:32.299 14:14:37 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:08:32.299 14:14:37 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:08:32.299 14:14:37 -- common/autotest_common.sh@1197 -- # return 0 00:08:32.299 14:14:37 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:32.299 14:14:37 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:32.299 14:14:37 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:32.299 14:14:37 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:32.299 14:14:37 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:32.299 14:14:37 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:32.299 14:14:37 -- 
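get_bdev_size multiplies the block size and block count that bdev_get_bdevs reports, so the test can later verify the initiator-side namespace has the expected capacity, and waitforserial polls lsblk until a device carrying the subsystem's serial number appears. Trimmed down (the hostnqn/hostid arguments shown in the trace are omitted here):

  bdev_info=$(scripts/rpc.py bdev_get_bdevs -b Malloc1)
  bs=$(jq '.[] .block_size' <<< "$bdev_info")    # 512
  nb=$(jq '.[] .num_blocks' <<< "$bdev_info")    # 1048576
  echo $(( bs * nb ))                            # 536870912 bytes, the expected namespace size
  # connect from the root namespace, then wait for the block device to show up
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do
      sleep 2
  done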
setup/common.sh@80 -- # echo 536870912 00:08:32.299 14:14:37 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:32.299 14:14:37 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:32.299 14:14:37 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:32.299 14:14:37 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:32.299 14:14:37 -- target/filesystem.sh@69 -- # partprobe 00:08:32.299 14:14:37 -- target/filesystem.sh@70 -- # sleep 1 00:08:32.865 14:14:38 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:32.865 14:14:38 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:32.865 14:14:38 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:32.865 14:14:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:32.865 14:14:38 -- common/autotest_common.sh@10 -- # set +x 00:08:32.865 ************************************ 00:08:32.865 START TEST filesystem_ext4 00:08:32.865 ************************************ 00:08:32.865 14:14:38 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:32.865 14:14:38 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:32.865 14:14:38 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:32.865 14:14:38 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:32.865 14:14:38 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:32.865 14:14:38 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:32.865 14:14:38 -- common/autotest_common.sh@914 -- # local i=0 00:08:32.865 14:14:38 -- common/autotest_common.sh@915 -- # local force 00:08:32.865 14:14:38 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:32.865 14:14:38 -- common/autotest_common.sh@918 -- # force=-F 00:08:32.865 14:14:38 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:32.865 mke2fs 1.47.0 (5-Feb-2023) 00:08:33.123 Discarding device blocks: 0/522240 done 00:08:33.123 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:33.123 Filesystem UUID: 945e73e8-1bb8-49e2-86f8-5e64a2c97ebd 00:08:33.123 Superblock backups stored on blocks: 00:08:33.123 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:33.123 00:08:33.123 Allocating group tables: 0/64 done 00:08:33.123 Writing inode tables: 0/64 done 00:08:33.123 Creating journal (8192 blocks): done 00:08:33.123 Writing superblocks and filesystem accounting information: 0/64 done 00:08:33.123 00:08:33.123 14:14:38 -- common/autotest_common.sh@931 -- # return 0 00:08:33.123 14:14:38 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:38.469 14:14:43 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:38.469 14:14:44 -- target/filesystem.sh@25 -- # sync 00:08:38.469 14:14:44 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:38.469 14:14:44 -- target/filesystem.sh@27 -- # sync 00:08:38.728 14:14:44 -- target/filesystem.sh@29 -- # i=0 00:08:38.728 14:14:44 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:38.728 14:14:44 -- target/filesystem.sh@37 -- # kill -0 72491 00:08:38.728 14:14:44 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:38.728 14:14:44 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:38.728 14:14:44 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:38.728 14:14:44 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:38.728 00:08:38.728 real 0m5.656s 00:08:38.728 user 0m0.031s 00:08:38.728 sys 0m0.070s 00:08:38.728 
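Each filesystem_* subtest then runs the same cycle against the GPT partition created above: build the filesystem, mount it, prove a small file survives a sync, unmount, and confirm that both the target process and the exported device are still healthy. In outline, for the ext4 case (btrfs and xfs only swap the mkfs command):

  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe
  mkfs.ext4 -F /dev/nvme0n1p1
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 "$nvmfpid"                       # the target must still be running
  lsblk -l -o NAME | grep -q -w nvme0n1    # and the controller must still be visible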
14:14:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:38.728 14:14:44 -- common/autotest_common.sh@10 -- # set +x 00:08:38.728 ************************************ 00:08:38.728 END TEST filesystem_ext4 00:08:38.728 ************************************ 00:08:38.728 14:14:44 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:38.728 14:14:44 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:38.728 14:14:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:38.728 14:14:44 -- common/autotest_common.sh@10 -- # set +x 00:08:38.728 ************************************ 00:08:38.728 START TEST filesystem_btrfs 00:08:38.728 ************************************ 00:08:38.728 14:14:44 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:38.728 14:14:44 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:38.728 14:14:44 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:38.728 14:14:44 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:38.728 14:14:44 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:38.728 14:14:44 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:38.729 14:14:44 -- common/autotest_common.sh@914 -- # local i=0 00:08:38.729 14:14:44 -- common/autotest_common.sh@915 -- # local force 00:08:38.729 14:14:44 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:38.729 14:14:44 -- common/autotest_common.sh@920 -- # force=-f 00:08:38.729 14:14:44 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:38.987 btrfs-progs v6.8.1 00:08:38.987 See https://btrfs.readthedocs.io for more information. 00:08:38.987 00:08:38.987 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:38.987 NOTE: several default settings have changed in version 5.15, please make sure 00:08:38.987 this does not affect your deployments: 00:08:38.987 - DUP for metadata (-m dup) 00:08:38.987 - enabled no-holes (-O no-holes) 00:08:38.987 - enabled free-space-tree (-R free-space-tree) 00:08:38.987 00:08:38.987 Label: (null) 00:08:38.987 UUID: b5e5dc1f-ad4f-468d-a621-a8b771f96509 00:08:38.987 Node size: 16384 00:08:38.987 Sector size: 4096 (CPU page size: 4096) 00:08:38.987 Filesystem size: 510.00MiB 00:08:38.987 Block group profiles: 00:08:38.987 Data: single 8.00MiB 00:08:38.987 Metadata: DUP 32.00MiB 00:08:38.987 System: DUP 8.00MiB 00:08:38.987 SSD detected: yes 00:08:38.987 Zoned device: no 00:08:38.987 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:38.987 Checksum: crc32c 00:08:38.987 Number of devices: 1 00:08:38.987 Devices: 00:08:38.987 ID SIZE PATH 00:08:38.987 1 510.00MiB /dev/nvme0n1p1 00:08:38.987 00:08:38.987 14:14:44 -- common/autotest_common.sh@931 -- # return 0 00:08:38.987 14:14:44 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:38.987 14:14:44 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:38.987 14:14:44 -- target/filesystem.sh@25 -- # sync 00:08:38.987 14:14:44 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:38.987 14:14:44 -- target/filesystem.sh@27 -- # sync 00:08:38.987 14:14:44 -- target/filesystem.sh@29 -- # i=0 00:08:38.987 14:14:44 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:38.987 14:14:44 -- target/filesystem.sh@37 -- # kill -0 72491 00:08:38.987 14:14:44 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:38.987 14:14:44 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:38.987 14:14:44 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:38.987 14:14:44 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:38.987 00:08:38.987 real 0m0.360s 00:08:38.987 user 0m0.024s 00:08:38.987 sys 0m0.066s 00:08:38.987 14:14:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:38.987 14:14:44 -- common/autotest_common.sh@10 -- # set +x 00:08:38.987 ************************************ 00:08:38.987 END TEST filesystem_btrfs 00:08:38.987 ************************************ 00:08:38.987 14:14:44 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:38.987 14:14:44 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:38.987 14:14:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:38.987 14:14:44 -- common/autotest_common.sh@10 -- # set +x 00:08:38.987 ************************************ 00:08:38.987 START TEST filesystem_xfs 00:08:38.987 ************************************ 00:08:38.987 14:14:44 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:08:38.987 14:14:44 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:38.987 14:14:44 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:38.987 14:14:44 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:38.987 14:14:44 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:08:38.988 14:14:44 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:38.988 14:14:44 -- common/autotest_common.sh@914 -- # local i=0 00:08:38.988 14:14:44 -- common/autotest_common.sh@915 -- # local force 00:08:38.988 14:14:44 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:38.988 14:14:44 -- common/autotest_common.sh@920 -- # force=-f 00:08:38.988 14:14:44 -- common/autotest_common.sh@923 -- # mkfs.xfs -f 
/dev/nvme0n1p1 00:08:39.246 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:39.246 = sectsz=512 attr=2, projid32bit=1 00:08:39.246 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:39.246 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:39.246 data = bsize=4096 blocks=130560, imaxpct=25 00:08:39.246 = sunit=0 swidth=0 blks 00:08:39.246 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:39.246 log =internal log bsize=4096 blocks=16384, version=2 00:08:39.246 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:39.246 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:39.811 Discarding blocks...Done. 00:08:39.811 14:14:45 -- common/autotest_common.sh@931 -- # return 0 00:08:39.811 14:14:45 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:42.400 14:14:47 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:42.400 14:14:47 -- target/filesystem.sh@25 -- # sync 00:08:42.400 14:14:47 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:42.400 14:14:47 -- target/filesystem.sh@27 -- # sync 00:08:42.400 14:14:47 -- target/filesystem.sh@29 -- # i=0 00:08:42.400 14:14:47 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:42.400 14:14:47 -- target/filesystem.sh@37 -- # kill -0 72491 00:08:42.400 14:14:47 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:42.400 14:14:47 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:42.400 14:14:47 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:42.400 14:14:47 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:42.400 00:08:42.400 real 0m3.203s 00:08:42.400 user 0m0.021s 00:08:42.400 sys 0m0.067s 00:08:42.400 14:14:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:42.400 14:14:47 -- common/autotest_common.sh@10 -- # set +x 00:08:42.400 ************************************ 00:08:42.400 END TEST filesystem_xfs 00:08:42.400 ************************************ 00:08:42.400 14:14:47 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:42.400 14:14:47 -- target/filesystem.sh@93 -- # sync 00:08:42.400 14:14:47 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:42.658 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.659 14:14:48 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:42.659 14:14:48 -- common/autotest_common.sh@1208 -- # local i=0 00:08:42.659 14:14:48 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:08:42.659 14:14:48 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:42.659 14:14:48 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:42.659 14:14:48 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:08:42.659 14:14:48 -- common/autotest_common.sh@1220 -- # return 0 00:08:42.659 14:14:48 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:42.659 14:14:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.659 14:14:48 -- common/autotest_common.sh@10 -- # set +x 00:08:42.659 14:14:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.659 14:14:48 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:42.659 14:14:48 -- target/filesystem.sh@101 -- # killprocess 72491 00:08:42.659 14:14:48 -- common/autotest_common.sh@936 -- # '[' -z 72491 ']' 00:08:42.659 14:14:48 -- common/autotest_common.sh@940 -- # kill -0 72491 00:08:42.659 14:14:48 -- common/autotest_common.sh@941 -- # uname 00:08:42.659 14:14:48 -- 
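Once the three filesystem subtests pass, the suite cleans up in the order traced here: drop the test partition, disconnect the initiator, delete the subsystem over RPC, and stop the target. Roughly:

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid" && wait "$nvmfpid"       # killprocess: signal the app, then wait for it to exit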
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:42.659 14:14:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72491 00:08:42.659 killing process with pid 72491 00:08:42.659 14:14:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:42.659 14:14:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:42.659 14:14:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72491' 00:08:42.659 14:14:48 -- common/autotest_common.sh@955 -- # kill 72491 00:08:42.659 14:14:48 -- common/autotest_common.sh@960 -- # wait 72491 00:08:42.917 ************************************ 00:08:42.917 END TEST nvmf_filesystem_no_in_capsule 00:08:42.917 ************************************ 00:08:42.917 14:14:48 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:42.917 00:08:42.917 real 0m14.870s 00:08:42.917 user 0m57.657s 00:08:42.917 sys 0m1.670s 00:08:42.917 14:14:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:42.917 14:14:48 -- common/autotest_common.sh@10 -- # set +x 00:08:42.917 14:14:48 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:42.917 14:14:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:42.917 14:14:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:42.917 14:14:48 -- common/autotest_common.sh@10 -- # set +x 00:08:43.176 ************************************ 00:08:43.176 START TEST nvmf_filesystem_in_capsule 00:08:43.176 ************************************ 00:08:43.176 14:14:48 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 4096 00:08:43.176 14:14:48 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:43.176 14:14:48 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:43.176 14:14:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:43.176 14:14:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:43.176 14:14:48 -- common/autotest_common.sh@10 -- # set +x 00:08:43.176 14:14:48 -- nvmf/common.sh@469 -- # nvmfpid=72869 00:08:43.176 14:14:48 -- nvmf/common.sh@470 -- # waitforlisten 72869 00:08:43.176 14:14:48 -- common/autotest_common.sh@829 -- # '[' -z 72869 ']' 00:08:43.176 14:14:48 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:43.176 14:14:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.176 14:14:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:43.176 14:14:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.176 14:14:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:43.176 14:14:48 -- common/autotest_common.sh@10 -- # set +x 00:08:43.176 [2024-12-05 14:14:48.640735] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
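The second pass, nvmf_filesystem_in_capsule, repeats the same setup and filesystem cycle; the only functional difference is the in-capsule data size handed to the transport, which lets small write payloads ride inside the NVMe/TCP command capsule instead of being transferred separately:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0       # first pass: no in-capsule data
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096    # this pass: 4 KiB in-capsule data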
00:08:43.176 [2024-12-05 14:14:48.640850] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.176 [2024-12-05 14:14:48.785920] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:43.436 [2024-12-05 14:14:48.853525] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:43.436 [2024-12-05 14:14:48.854080] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:43.436 [2024-12-05 14:14:48.854262] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:43.436 [2024-12-05 14:14:48.854468] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:43.436 [2024-12-05 14:14:48.854892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.436 [2024-12-05 14:14:48.855010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:43.436 [2024-12-05 14:14:48.855087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:43.436 [2024-12-05 14:14:48.855091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.003 14:14:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:44.003 14:14:49 -- common/autotest_common.sh@862 -- # return 0 00:08:44.003 14:14:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:44.003 14:14:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:44.003 14:14:49 -- common/autotest_common.sh@10 -- # set +x 00:08:44.262 14:14:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:44.262 14:14:49 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:44.262 14:14:49 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:44.262 14:14:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.262 14:14:49 -- common/autotest_common.sh@10 -- # set +x 00:08:44.262 [2024-12-05 14:14:49.657774] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:44.262 14:14:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.262 14:14:49 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:44.262 14:14:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.262 14:14:49 -- common/autotest_common.sh@10 -- # set +x 00:08:44.262 Malloc1 00:08:44.262 14:14:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.262 14:14:49 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:44.262 14:14:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.262 14:14:49 -- common/autotest_common.sh@10 -- # set +x 00:08:44.262 14:14:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.262 14:14:49 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:44.262 14:14:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.262 14:14:49 -- common/autotest_common.sh@10 -- # set +x 00:08:44.262 14:14:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.262 14:14:49 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:44.262 14:14:49 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.262 14:14:49 -- common/autotest_common.sh@10 -- # set +x 00:08:44.262 [2024-12-05 14:14:49.892726] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:44.262 14:14:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.262 14:14:49 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:44.262 14:14:49 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:08:44.262 14:14:49 -- common/autotest_common.sh@1368 -- # local bdev_info 00:08:44.262 14:14:49 -- common/autotest_common.sh@1369 -- # local bs 00:08:44.262 14:14:49 -- common/autotest_common.sh@1370 -- # local nb 00:08:44.262 14:14:49 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:44.262 14:14:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.262 14:14:49 -- common/autotest_common.sh@10 -- # set +x 00:08:44.521 14:14:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.521 14:14:49 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:08:44.521 { 00:08:44.521 "aliases": [ 00:08:44.521 "cb0c78e9-9762-4180-8b9b-6ad9469a1df6" 00:08:44.521 ], 00:08:44.521 "assigned_rate_limits": { 00:08:44.521 "r_mbytes_per_sec": 0, 00:08:44.521 "rw_ios_per_sec": 0, 00:08:44.521 "rw_mbytes_per_sec": 0, 00:08:44.521 "w_mbytes_per_sec": 0 00:08:44.521 }, 00:08:44.521 "block_size": 512, 00:08:44.521 "claim_type": "exclusive_write", 00:08:44.521 "claimed": true, 00:08:44.521 "driver_specific": {}, 00:08:44.521 "memory_domains": [ 00:08:44.521 { 00:08:44.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.521 "dma_device_type": 2 00:08:44.521 } 00:08:44.521 ], 00:08:44.521 "name": "Malloc1", 00:08:44.521 "num_blocks": 1048576, 00:08:44.521 "product_name": "Malloc disk", 00:08:44.521 "supported_io_types": { 00:08:44.521 "abort": true, 00:08:44.521 "compare": false, 00:08:44.521 "compare_and_write": false, 00:08:44.521 "flush": true, 00:08:44.521 "nvme_admin": false, 00:08:44.521 "nvme_io": false, 00:08:44.521 "read": true, 00:08:44.521 "reset": true, 00:08:44.521 "unmap": true, 00:08:44.521 "write": true, 00:08:44.521 "write_zeroes": true 00:08:44.521 }, 00:08:44.521 "uuid": "cb0c78e9-9762-4180-8b9b-6ad9469a1df6", 00:08:44.521 "zoned": false 00:08:44.521 } 00:08:44.521 ]' 00:08:44.521 14:14:49 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:08:44.521 14:14:49 -- common/autotest_common.sh@1372 -- # bs=512 00:08:44.521 14:14:49 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:08:44.521 14:14:50 -- common/autotest_common.sh@1373 -- # nb=1048576 00:08:44.521 14:14:50 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:08:44.521 14:14:50 -- common/autotest_common.sh@1377 -- # echo 512 00:08:44.521 14:14:50 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:44.521 14:14:50 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:44.780 14:14:50 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:44.780 14:14:50 -- common/autotest_common.sh@1187 -- # local i=0 00:08:44.780 14:14:50 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:08:44.780 14:14:50 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:08:44.780 14:14:50 -- common/autotest_common.sh@1194 -- # sleep 2 00:08:46.684 14:14:52 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:08:46.684 14:14:52 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:08:46.684 14:14:52 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:08:46.684 14:14:52 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:08:46.684 14:14:52 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:08:46.684 14:14:52 -- common/autotest_common.sh@1197 -- # return 0 00:08:46.684 14:14:52 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:46.684 14:14:52 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:46.684 14:14:52 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:46.684 14:14:52 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:46.684 14:14:52 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:46.684 14:14:52 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:46.684 14:14:52 -- setup/common.sh@80 -- # echo 536870912 00:08:46.684 14:14:52 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:46.684 14:14:52 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:46.684 14:14:52 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:46.684 14:14:52 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:46.684 14:14:52 -- target/filesystem.sh@69 -- # partprobe 00:08:46.943 14:14:52 -- target/filesystem.sh@70 -- # sleep 1 00:08:47.880 14:14:53 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:47.880 14:14:53 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:47.880 14:14:53 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:47.880 14:14:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:47.880 14:14:53 -- common/autotest_common.sh@10 -- # set +x 00:08:47.880 ************************************ 00:08:47.880 START TEST filesystem_in_capsule_ext4 00:08:47.880 ************************************ 00:08:47.880 14:14:53 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:47.880 14:14:53 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:47.880 14:14:53 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:47.880 14:14:53 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:47.880 14:14:53 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:47.880 14:14:53 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:47.880 14:14:53 -- common/autotest_common.sh@914 -- # local i=0 00:08:47.880 14:14:53 -- common/autotest_common.sh@915 -- # local force 00:08:47.880 14:14:53 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:47.880 14:14:53 -- common/autotest_common.sh@918 -- # force=-F 00:08:47.880 14:14:53 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:47.880 mke2fs 1.47.0 (5-Feb-2023) 00:08:48.139 Discarding device blocks: 0/522240 done 00:08:48.139 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:48.139 Filesystem UUID: 7e4da97e-b6bf-4f41-ab08-e9c0ae34f0dc 00:08:48.139 Superblock backups stored on blocks: 00:08:48.139 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:48.139 00:08:48.139 Allocating group tables: 0/64 done 00:08:48.139 Writing inode tables: 0/64 done 00:08:48.139 Creating journal (8192 blocks): done 00:08:48.139 Writing superblocks and filesystem accounting information: 0/64 done 00:08:48.139 00:08:48.139 14:14:53 
-- common/autotest_common.sh@931 -- # return 0 00:08:48.139 14:14:53 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:53.408 14:14:58 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:53.408 14:14:59 -- target/filesystem.sh@25 -- # sync 00:08:53.667 14:14:59 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:53.667 14:14:59 -- target/filesystem.sh@27 -- # sync 00:08:53.667 14:14:59 -- target/filesystem.sh@29 -- # i=0 00:08:53.667 14:14:59 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:53.667 14:14:59 -- target/filesystem.sh@37 -- # kill -0 72869 00:08:53.667 14:14:59 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:53.667 14:14:59 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:53.667 14:14:59 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:53.667 14:14:59 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:53.667 ************************************ 00:08:53.667 END TEST filesystem_in_capsule_ext4 00:08:53.667 ************************************ 00:08:53.667 00:08:53.667 real 0m5.755s 00:08:53.667 user 0m0.030s 00:08:53.667 sys 0m0.056s 00:08:53.667 14:14:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:53.667 14:14:59 -- common/autotest_common.sh@10 -- # set +x 00:08:53.667 14:14:59 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:53.667 14:14:59 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:53.667 14:14:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:53.667 14:14:59 -- common/autotest_common.sh@10 -- # set +x 00:08:53.667 ************************************ 00:08:53.667 START TEST filesystem_in_capsule_btrfs 00:08:53.667 ************************************ 00:08:53.667 14:14:59 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:53.667 14:14:59 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:53.667 14:14:59 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:53.667 14:14:59 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:53.667 14:14:59 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:53.667 14:14:59 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:53.667 14:14:59 -- common/autotest_common.sh@914 -- # local i=0 00:08:53.667 14:14:59 -- common/autotest_common.sh@915 -- # local force 00:08:53.667 14:14:59 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:53.667 14:14:59 -- common/autotest_common.sh@920 -- # force=-f 00:08:53.667 14:14:59 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:53.926 btrfs-progs v6.8.1 00:08:53.926 See https://btrfs.readthedocs.io for more information. 00:08:53.926 00:08:53.926 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:53.926 NOTE: several default settings have changed in version 5.15, please make sure 00:08:53.926 this does not affect your deployments: 00:08:53.926 - DUP for metadata (-m dup) 00:08:53.926 - enabled no-holes (-O no-holes) 00:08:53.926 - enabled free-space-tree (-R free-space-tree) 00:08:53.926 00:08:53.926 Label: (null) 00:08:53.926 UUID: f36ddfef-fa13-4a77-9201-c4e466f36f3f 00:08:53.926 Node size: 16384 00:08:53.926 Sector size: 4096 (CPU page size: 4096) 00:08:53.926 Filesystem size: 510.00MiB 00:08:53.926 Block group profiles: 00:08:53.926 Data: single 8.00MiB 00:08:53.926 Metadata: DUP 32.00MiB 00:08:53.926 System: DUP 8.00MiB 00:08:53.926 SSD detected: yes 00:08:53.926 Zoned device: no 00:08:53.927 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:53.927 Checksum: crc32c 00:08:53.927 Number of devices: 1 00:08:53.927 Devices: 00:08:53.927 ID SIZE PATH 00:08:53.927 1 510.00MiB /dev/nvme0n1p1 00:08:53.927 00:08:53.927 14:14:59 -- common/autotest_common.sh@931 -- # return 0 00:08:53.927 14:14:59 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:53.927 14:14:59 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:53.927 14:14:59 -- target/filesystem.sh@25 -- # sync 00:08:53.927 14:14:59 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:53.927 14:14:59 -- target/filesystem.sh@27 -- # sync 00:08:53.927 14:14:59 -- target/filesystem.sh@29 -- # i=0 00:08:53.927 14:14:59 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:53.927 14:14:59 -- target/filesystem.sh@37 -- # kill -0 72869 00:08:53.927 14:14:59 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:53.927 14:14:59 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:53.927 14:14:59 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:53.927 14:14:59 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:53.927 ************************************ 00:08:53.927 END TEST filesystem_in_capsule_btrfs 00:08:53.927 ************************************ 00:08:53.927 00:08:53.927 real 0m0.306s 00:08:53.927 user 0m0.017s 00:08:53.927 sys 0m0.067s 00:08:53.927 14:14:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:53.927 14:14:59 -- common/autotest_common.sh@10 -- # set +x 00:08:53.927 14:14:59 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:53.927 14:14:59 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:53.927 14:14:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:53.927 14:14:59 -- common/autotest_common.sh@10 -- # set +x 00:08:53.927 ************************************ 00:08:53.927 START TEST filesystem_in_capsule_xfs 00:08:53.927 ************************************ 00:08:53.927 14:14:59 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:08:53.927 14:14:59 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:53.927 14:14:59 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:53.927 14:14:59 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:53.927 14:14:59 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:08:53.927 14:14:59 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:53.927 14:14:59 -- common/autotest_common.sh@914 -- # local i=0 00:08:53.927 14:14:59 -- common/autotest_common.sh@915 -- # local force 00:08:53.927 14:14:59 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:53.927 14:14:59 -- common/autotest_common.sh@920 -- # force=-f 00:08:53.927 14:14:59 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:54.186 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:54.186 = sectsz=512 attr=2, projid32bit=1 00:08:54.186 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:54.186 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:54.186 data = bsize=4096 blocks=130560, imaxpct=25 00:08:54.186 = sunit=0 swidth=0 blks 00:08:54.186 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:54.186 log =internal log bsize=4096 blocks=16384, version=2 00:08:54.186 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:54.186 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:54.753 Discarding blocks...Done. 00:08:54.753 14:15:00 -- common/autotest_common.sh@931 -- # return 0 00:08:54.753 14:15:00 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:56.655 14:15:02 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:56.655 14:15:02 -- target/filesystem.sh@25 -- # sync 00:08:56.655 14:15:02 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:56.655 14:15:02 -- target/filesystem.sh@27 -- # sync 00:08:56.655 14:15:02 -- target/filesystem.sh@29 -- # i=0 00:08:56.655 14:15:02 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:56.655 14:15:02 -- target/filesystem.sh@37 -- # kill -0 72869 00:08:56.655 14:15:02 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:56.655 14:15:02 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:56.655 14:15:02 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:56.655 14:15:02 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:56.655 ************************************ 00:08:56.655 END TEST filesystem_in_capsule_xfs 00:08:56.655 ************************************ 00:08:56.655 00:08:56.655 real 0m2.658s 00:08:56.655 user 0m0.025s 00:08:56.655 sys 0m0.060s 00:08:56.655 14:15:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:56.655 14:15:02 -- common/autotest_common.sh@10 -- # set +x 00:08:56.655 14:15:02 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:56.655 14:15:02 -- target/filesystem.sh@93 -- # sync 00:08:56.655 14:15:02 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:56.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.915 14:15:02 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:56.915 14:15:02 -- common/autotest_common.sh@1208 -- # local i=0 00:08:56.915 14:15:02 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:08:56.915 14:15:02 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:56.915 14:15:02 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:08:56.915 14:15:02 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:56.915 14:15:02 -- common/autotest_common.sh@1220 -- # return 0 00:08:56.915 14:15:02 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:56.915 14:15:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.915 14:15:02 -- common/autotest_common.sh@10 -- # set +x 00:08:56.915 14:15:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.915 14:15:02 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:56.915 14:15:02 -- target/filesystem.sh@101 -- # killprocess 72869 00:08:56.915 14:15:02 -- common/autotest_common.sh@936 -- # '[' -z 72869 ']' 00:08:56.915 14:15:02 -- common/autotest_common.sh@940 -- # kill -0 72869 00:08:56.915 14:15:02 -- 
common/autotest_common.sh@941 -- # uname 00:08:56.915 14:15:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:56.915 14:15:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72869 00:08:56.915 killing process with pid 72869 00:08:56.915 14:15:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:56.915 14:15:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:56.915 14:15:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72869' 00:08:56.915 14:15:02 -- common/autotest_common.sh@955 -- # kill 72869 00:08:56.915 14:15:02 -- common/autotest_common.sh@960 -- # wait 72869 00:08:57.482 ************************************ 00:08:57.482 END TEST nvmf_filesystem_in_capsule 00:08:57.482 ************************************ 00:08:57.482 14:15:03 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:57.482 00:08:57.482 real 0m14.503s 00:08:57.482 user 0m55.974s 00:08:57.482 sys 0m1.660s 00:08:57.482 14:15:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:57.482 14:15:03 -- common/autotest_common.sh@10 -- # set +x 00:08:57.482 14:15:03 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:57.482 14:15:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:57.482 14:15:03 -- nvmf/common.sh@116 -- # sync 00:08:57.741 14:15:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:57.741 14:15:03 -- nvmf/common.sh@119 -- # set +e 00:08:57.741 14:15:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:57.741 14:15:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:57.741 rmmod nvme_tcp 00:08:57.741 rmmod nvme_fabrics 00:08:57.741 rmmod nvme_keyring 00:08:57.741 14:15:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:57.741 14:15:03 -- nvmf/common.sh@123 -- # set -e 00:08:57.741 14:15:03 -- nvmf/common.sh@124 -- # return 0 00:08:57.741 14:15:03 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:57.741 14:15:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:57.741 14:15:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:57.741 14:15:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:57.741 14:15:03 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:57.741 14:15:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:57.741 14:15:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.741 14:15:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:57.741 14:15:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.741 14:15:03 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:57.741 00:08:57.741 real 0m30.471s 00:08:57.741 user 1m54.053s 00:08:57.741 sys 0m3.785s 00:08:57.741 14:15:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:57.741 ************************************ 00:08:57.741 END TEST nvmf_filesystem 00:08:57.741 ************************************ 00:08:57.741 14:15:03 -- common/autotest_common.sh@10 -- # set +x 00:08:57.741 14:15:03 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:57.741 14:15:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:57.741 14:15:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:57.741 14:15:03 -- common/autotest_common.sh@10 -- # set +x 00:08:57.741 ************************************ 00:08:57.741 START TEST nvmf_discovery 00:08:57.741 ************************************ 00:08:57.741 14:15:03 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:58.000 * Looking for test storage... 00:08:58.000 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:58.000 14:15:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:58.000 14:15:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:58.000 14:15:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:58.000 14:15:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:58.000 14:15:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:58.000 14:15:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:58.000 14:15:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:58.000 14:15:03 -- scripts/common.sh@335 -- # IFS=.-: 00:08:58.000 14:15:03 -- scripts/common.sh@335 -- # read -ra ver1 00:08:58.000 14:15:03 -- scripts/common.sh@336 -- # IFS=.-: 00:08:58.000 14:15:03 -- scripts/common.sh@336 -- # read -ra ver2 00:08:58.000 14:15:03 -- scripts/common.sh@337 -- # local 'op=<' 00:08:58.000 14:15:03 -- scripts/common.sh@339 -- # ver1_l=2 00:08:58.001 14:15:03 -- scripts/common.sh@340 -- # ver2_l=1 00:08:58.001 14:15:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:58.001 14:15:03 -- scripts/common.sh@343 -- # case "$op" in 00:08:58.001 14:15:03 -- scripts/common.sh@344 -- # : 1 00:08:58.001 14:15:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:58.001 14:15:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:58.001 14:15:03 -- scripts/common.sh@364 -- # decimal 1 00:08:58.001 14:15:03 -- scripts/common.sh@352 -- # local d=1 00:08:58.001 14:15:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:58.001 14:15:03 -- scripts/common.sh@354 -- # echo 1 00:08:58.001 14:15:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:58.001 14:15:03 -- scripts/common.sh@365 -- # decimal 2 00:08:58.001 14:15:03 -- scripts/common.sh@352 -- # local d=2 00:08:58.001 14:15:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:58.001 14:15:03 -- scripts/common.sh@354 -- # echo 2 00:08:58.001 14:15:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:58.001 14:15:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:58.001 14:15:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:58.001 14:15:03 -- scripts/common.sh@367 -- # return 0 00:08:58.001 14:15:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:58.001 14:15:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:58.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.001 --rc genhtml_branch_coverage=1 00:08:58.001 --rc genhtml_function_coverage=1 00:08:58.001 --rc genhtml_legend=1 00:08:58.001 --rc geninfo_all_blocks=1 00:08:58.001 --rc geninfo_unexecuted_blocks=1 00:08:58.001 00:08:58.001 ' 00:08:58.001 14:15:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:58.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.001 --rc genhtml_branch_coverage=1 00:08:58.001 --rc genhtml_function_coverage=1 00:08:58.001 --rc genhtml_legend=1 00:08:58.001 --rc geninfo_all_blocks=1 00:08:58.001 --rc geninfo_unexecuted_blocks=1 00:08:58.001 00:08:58.001 ' 00:08:58.001 14:15:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:58.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.001 --rc genhtml_branch_coverage=1 00:08:58.001 --rc genhtml_function_coverage=1 00:08:58.001 --rc genhtml_legend=1 00:08:58.001 
--rc geninfo_all_blocks=1 00:08:58.001 --rc geninfo_unexecuted_blocks=1 00:08:58.001 00:08:58.001 ' 00:08:58.001 14:15:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:58.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.001 --rc genhtml_branch_coverage=1 00:08:58.001 --rc genhtml_function_coverage=1 00:08:58.001 --rc genhtml_legend=1 00:08:58.001 --rc geninfo_all_blocks=1 00:08:58.001 --rc geninfo_unexecuted_blocks=1 00:08:58.001 00:08:58.001 ' 00:08:58.001 14:15:03 -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:58.001 14:15:03 -- nvmf/common.sh@7 -- # uname -s 00:08:58.001 14:15:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.001 14:15:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.001 14:15:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.001 14:15:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.001 14:15:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.001 14:15:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.001 14:15:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.001 14:15:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.001 14:15:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.001 14:15:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.001 14:15:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:08:58.001 14:15:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:08:58.001 14:15:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.001 14:15:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.001 14:15:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:58.001 14:15:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:58.001 14:15:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.001 14:15:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.001 14:15:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.001 14:15:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.001 14:15:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.001 14:15:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.001 14:15:03 -- paths/export.sh@5 -- # export PATH 00:08:58.001 14:15:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.001 14:15:03 -- nvmf/common.sh@46 -- # : 0 00:08:58.001 14:15:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:58.001 14:15:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:58.001 14:15:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:58.001 14:15:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.001 14:15:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.001 14:15:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:58.001 14:15:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:58.001 14:15:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:58.001 14:15:03 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:58.001 14:15:03 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:58.001 14:15:03 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:58.001 14:15:03 -- target/discovery.sh@15 -- # hash nvme 00:08:58.001 14:15:03 -- target/discovery.sh@20 -- # nvmftestinit 00:08:58.001 14:15:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:58.001 14:15:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:58.001 14:15:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:58.001 14:15:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:58.001 14:15:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:58.001 14:15:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.001 14:15:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:58.001 14:15:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.001 14:15:03 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:58.001 14:15:03 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:58.001 14:15:03 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:58.001 14:15:03 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:58.001 14:15:03 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:58.001 14:15:03 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:58.001 14:15:03 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:58.001 14:15:03 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:58.001 14:15:03 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:58.001 14:15:03 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:58.001 14:15:03 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:58.001 14:15:03 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:58.001 14:15:03 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:58.001 14:15:03 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:58.001 14:15:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:58.001 14:15:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:58.001 14:15:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:58.001 14:15:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:58.001 14:15:03 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:58.001 14:15:03 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:58.001 Cannot find device "nvmf_tgt_br" 00:08:58.001 14:15:03 -- nvmf/common.sh@154 -- # true 00:08:58.001 14:15:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:58.001 Cannot find device "nvmf_tgt_br2" 00:08:58.001 14:15:03 -- nvmf/common.sh@155 -- # true 00:08:58.001 14:15:03 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:58.001 14:15:03 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:58.001 Cannot find device "nvmf_tgt_br" 00:08:58.001 14:15:03 -- nvmf/common.sh@157 -- # true 00:08:58.001 14:15:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:58.001 Cannot find device "nvmf_tgt_br2" 00:08:58.001 14:15:03 -- nvmf/common.sh@158 -- # true 00:08:58.001 14:15:03 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:58.261 14:15:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:58.261 14:15:03 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:58.261 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:58.261 14:15:03 -- nvmf/common.sh@161 -- # true 00:08:58.261 14:15:03 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:58.261 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:58.261 14:15:03 -- nvmf/common.sh@162 -- # true 00:08:58.261 14:15:03 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:58.261 14:15:03 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:58.261 14:15:03 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:58.261 14:15:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:58.261 14:15:03 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:58.261 14:15:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:58.261 14:15:03 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:58.261 14:15:03 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:58.261 14:15:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:58.261 14:15:03 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:58.261 14:15:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:58.261 14:15:03 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:58.261 14:15:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:58.261 14:15:03 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:58.261 14:15:03 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:58.261 14:15:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:58.261 14:15:03 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:58.261 14:15:03 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:58.261 14:15:03 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:58.261 14:15:03 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:58.261 14:15:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:58.261 14:15:03 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:58.261 14:15:03 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:58.261 14:15:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:58.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:58.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:08:58.261 00:08:58.261 --- 10.0.0.2 ping statistics --- 00:08:58.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.261 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:08:58.261 14:15:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:58.261 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:58.261 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:08:58.261 00:08:58.261 --- 10.0.0.3 ping statistics --- 00:08:58.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.261 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:08:58.261 14:15:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:58.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:58.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:08:58.261 00:08:58.261 --- 10.0.0.1 ping statistics --- 00:08:58.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.261 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:08:58.261 14:15:03 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:58.261 14:15:03 -- nvmf/common.sh@421 -- # return 0 00:08:58.261 14:15:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:58.261 14:15:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:58.261 14:15:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:58.261 14:15:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:58.261 14:15:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:58.261 14:15:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:58.261 14:15:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:58.261 14:15:03 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:58.261 14:15:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:58.261 14:15:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:58.261 14:15:03 -- common/autotest_common.sh@10 -- # set +x 00:08:58.261 14:15:03 -- nvmf/common.sh@469 -- # nvmfpid=73423 00:08:58.261 14:15:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:58.261 14:15:03 -- nvmf/common.sh@470 -- # waitforlisten 73423 00:08:58.261 14:15:03 -- common/autotest_common.sh@829 -- # '[' -z 73423 ']' 00:08:58.261 14:15:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.261 14:15:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:58.261 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.261 14:15:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.261 14:15:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:58.261 14:15:03 -- common/autotest_common.sh@10 -- # set +x 00:08:58.521 [2024-12-05 14:15:03.954843] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:58.521 [2024-12-05 14:15:03.954925] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:58.521 [2024-12-05 14:15:04.087564] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:58.779 [2024-12-05 14:15:04.179278] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:58.779 [2024-12-05 14:15:04.179421] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:58.779 [2024-12-05 14:15:04.179433] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:58.779 [2024-12-05 14:15:04.179441] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:58.779 [2024-12-05 14:15:04.179606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:58.779 [2024-12-05 14:15:04.179893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:58.779 [2024-12-05 14:15:04.180373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:58.779 [2024-12-05 14:15:04.180418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.346 14:15:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:59.346 14:15:04 -- common/autotest_common.sh@862 -- # return 0 00:08:59.346 14:15:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:59.346 14:15:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:59.346 14:15:04 -- common/autotest_common.sh@10 -- # set +x 00:08:59.346 14:15:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:59.346 14:15:04 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:59.346 14:15:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.346 14:15:04 -- common/autotest_common.sh@10 -- # set +x 00:08:59.346 [2024-12-05 14:15:04.957957] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:59.346 14:15:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.346 14:15:04 -- target/discovery.sh@26 -- # seq 1 4 00:08:59.605 14:15:04 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:59.605 14:15:04 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:59.605 14:15:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.605 14:15:04 -- common/autotest_common.sh@10 -- # set +x 00:08:59.605 Null1 00:08:59.605 14:15:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.605 14:15:05 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:59.605 14:15:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.605 14:15:05 -- common/autotest_common.sh@10 -- # set +x 00:08:59.605 14:15:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
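(Annotation: the discovery.sh loop that has just started here, and that continues below for Null2 through Null4, provisions one null bdev, one subsystem, one namespace and one TCP listener per iteration. A minimal stand-alone sketch of the same sequence, assuming the default /var/tmp/spdk.sock RPC socket and the scripts/rpc.py client that the suite's rpc_cmd helper wraps; names, sizes, flags and the 10.0.0.2:4420 listener are copied from the trace, everything else is illustrative:)

    #!/usr/bin/env bash
    # Condensed re-creation of the provisioning traced above and below.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    for i in 1 2 3 4; do
        "$rpc" bdev_null_create "Null$i" 102400 512                      # size/block size as in the trace
        "$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
               -a -s "SPDK0000000000000$i"                               # -a: allow any host, -s: serial
        "$rpc" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        "$rpc" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
               -t tcp -a 10.0.0.2 -s 4420
    done
    "$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    "$rpc" nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

(The six-record discovery log and the nvmf_get_subsystems output further down are the direct result of this sequence: the current discovery subsystem, the four NVMe subsystems, and the one referral.)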
00:08:59.605 14:15:05 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:59.605 14:15:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.605 14:15:05 -- common/autotest_common.sh@10 -- # set +x 00:08:59.605 14:15:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.605 14:15:05 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:59.605 14:15:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.605 14:15:05 -- common/autotest_common.sh@10 -- # set +x 00:08:59.605 [2024-12-05 14:15:05.026693] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:59.605 14:15:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.605 14:15:05 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:59.605 14:15:05 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:59.605 14:15:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.605 14:15:05 -- common/autotest_common.sh@10 -- # set +x 00:08:59.605 Null2 00:08:59.605 14:15:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.605 14:15:05 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:59.605 14:15:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.605 14:15:05 -- common/autotest_common.sh@10 -- # set +x 00:08:59.605 14:15:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.605 14:15:05 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:59.605 14:15:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.605 14:15:05 -- common/autotest_common.sh@10 -- # set +x 00:08:59.605 14:15:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.605 14:15:05 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:59.605 14:15:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.605 14:15:05 -- common/autotest_common.sh@10 -- # set +x 00:08:59.605 14:15:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.605 14:15:05 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:59.605 14:15:05 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:59.605 14:15:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.605 14:15:05 -- common/autotest_common.sh@10 -- # set +x 00:08:59.605 Null3 00:08:59.605 14:15:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.605 14:15:05 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:59.605 14:15:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.605 14:15:05 -- common/autotest_common.sh@10 -- # set +x 00:08:59.605 14:15:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.605 14:15:05 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:59.605 14:15:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.605 14:15:05 -- common/autotest_common.sh@10 -- # set +x 00:08:59.605 14:15:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.605 14:15:05 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:59.605 14:15:05 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:59.605 14:15:05 -- common/autotest_common.sh@10 -- # set +x 00:08:59.605 14:15:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.605 14:15:05 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:59.605 14:15:05 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:59.605 14:15:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.605 14:15:05 -- common/autotest_common.sh@10 -- # set +x 00:08:59.605 Null4 00:08:59.605 14:15:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.605 14:15:05 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:59.605 14:15:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.605 14:15:05 -- common/autotest_common.sh@10 -- # set +x 00:08:59.605 14:15:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.605 14:15:05 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:59.605 14:15:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.605 14:15:05 -- common/autotest_common.sh@10 -- # set +x 00:08:59.605 14:15:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.605 14:15:05 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:59.605 14:15:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.605 14:15:05 -- common/autotest_common.sh@10 -- # set +x 00:08:59.605 14:15:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.605 14:15:05 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:59.605 14:15:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.605 14:15:05 -- common/autotest_common.sh@10 -- # set +x 00:08:59.605 14:15:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.605 14:15:05 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:59.605 14:15:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.605 14:15:05 -- common/autotest_common.sh@10 -- # set +x 00:08:59.605 14:15:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.605 14:15:05 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -t tcp -a 10.0.0.2 -s 4420 00:08:59.879 00:08:59.879 Discovery Log Number of Records 6, Generation counter 6 00:08:59.879 =====Discovery Log Entry 0====== 00:08:59.879 trtype: tcp 00:08:59.879 adrfam: ipv4 00:08:59.879 subtype: current discovery subsystem 00:08:59.879 treq: not required 00:08:59.879 portid: 0 00:08:59.879 trsvcid: 4420 00:08:59.879 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:59.879 traddr: 10.0.0.2 00:08:59.879 eflags: explicit discovery connections, duplicate discovery information 00:08:59.879 sectype: none 00:08:59.879 =====Discovery Log Entry 1====== 00:08:59.879 trtype: tcp 00:08:59.879 adrfam: ipv4 00:08:59.879 subtype: nvme subsystem 00:08:59.879 treq: not required 00:08:59.879 portid: 0 00:08:59.879 trsvcid: 4420 00:08:59.879 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:59.879 traddr: 10.0.0.2 00:08:59.879 eflags: none 00:08:59.879 sectype: none 00:08:59.879 =====Discovery Log Entry 2====== 00:08:59.879 trtype: tcp 00:08:59.879 adrfam: ipv4 00:08:59.879 subtype: nvme subsystem 00:08:59.879 treq: not required 00:08:59.879 portid: 0 00:08:59.879 trsvcid: 4420 
00:08:59.879 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:59.879 traddr: 10.0.0.2 00:08:59.879 eflags: none 00:08:59.879 sectype: none 00:08:59.879 =====Discovery Log Entry 3====== 00:08:59.879 trtype: tcp 00:08:59.879 adrfam: ipv4 00:08:59.879 subtype: nvme subsystem 00:08:59.879 treq: not required 00:08:59.879 portid: 0 00:08:59.879 trsvcid: 4420 00:08:59.879 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:59.879 traddr: 10.0.0.2 00:08:59.879 eflags: none 00:08:59.879 sectype: none 00:08:59.879 =====Discovery Log Entry 4====== 00:08:59.879 trtype: tcp 00:08:59.879 adrfam: ipv4 00:08:59.879 subtype: nvme subsystem 00:08:59.879 treq: not required 00:08:59.879 portid: 0 00:08:59.879 trsvcid: 4420 00:08:59.879 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:59.879 traddr: 10.0.0.2 00:08:59.879 eflags: none 00:08:59.879 sectype: none 00:08:59.879 =====Discovery Log Entry 5====== 00:08:59.879 trtype: tcp 00:08:59.879 adrfam: ipv4 00:08:59.879 subtype: discovery subsystem referral 00:08:59.879 treq: not required 00:08:59.879 portid: 0 00:08:59.879 trsvcid: 4430 00:08:59.879 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:59.879 traddr: 10.0.0.2 00:08:59.879 eflags: none 00:08:59.879 sectype: none 00:08:59.879 Perform nvmf subsystem discovery via RPC 00:08:59.879 14:15:05 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:59.879 14:15:05 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:59.879 14:15:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.879 14:15:05 -- common/autotest_common.sh@10 -- # set +x 00:08:59.879 [2024-12-05 14:15:05.266917] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:59.879 [ 00:08:59.879 { 00:08:59.879 "allow_any_host": true, 00:08:59.880 "hosts": [], 00:08:59.880 "listen_addresses": [ 00:08:59.880 { 00:08:59.880 "adrfam": "IPv4", 00:08:59.880 "traddr": "10.0.0.2", 00:08:59.880 "transport": "TCP", 00:08:59.880 "trsvcid": "4420", 00:08:59.880 "trtype": "TCP" 00:08:59.880 } 00:08:59.880 ], 00:08:59.880 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:59.880 "subtype": "Discovery" 00:08:59.880 }, 00:08:59.880 { 00:08:59.880 "allow_any_host": true, 00:08:59.880 "hosts": [], 00:08:59.880 "listen_addresses": [ 00:08:59.880 { 00:08:59.880 "adrfam": "IPv4", 00:08:59.880 "traddr": "10.0.0.2", 00:08:59.880 "transport": "TCP", 00:08:59.880 "trsvcid": "4420", 00:08:59.880 "trtype": "TCP" 00:08:59.880 } 00:08:59.880 ], 00:08:59.880 "max_cntlid": 65519, 00:08:59.880 "max_namespaces": 32, 00:08:59.880 "min_cntlid": 1, 00:08:59.880 "model_number": "SPDK bdev Controller", 00:08:59.880 "namespaces": [ 00:08:59.880 { 00:08:59.880 "bdev_name": "Null1", 00:08:59.880 "name": "Null1", 00:08:59.880 "nguid": "5E4CBE8CC6D742888581E0803D1F737D", 00:08:59.880 "nsid": 1, 00:08:59.880 "uuid": "5e4cbe8c-c6d7-4288-8581-e0803d1f737d" 00:08:59.880 } 00:08:59.880 ], 00:08:59.880 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:59.880 "serial_number": "SPDK00000000000001", 00:08:59.880 "subtype": "NVMe" 00:08:59.880 }, 00:08:59.880 { 00:08:59.880 "allow_any_host": true, 00:08:59.880 "hosts": [], 00:08:59.880 "listen_addresses": [ 00:08:59.880 { 00:08:59.880 "adrfam": "IPv4", 00:08:59.880 "traddr": "10.0.0.2", 00:08:59.880 "transport": "TCP", 00:08:59.880 "trsvcid": "4420", 00:08:59.880 "trtype": "TCP" 00:08:59.880 } 00:08:59.880 ], 00:08:59.880 "max_cntlid": 65519, 00:08:59.880 "max_namespaces": 32, 00:08:59.880 "min_cntlid": 1, 
00:08:59.880 "model_number": "SPDK bdev Controller", 00:08:59.880 "namespaces": [ 00:08:59.880 { 00:08:59.880 "bdev_name": "Null2", 00:08:59.880 "name": "Null2", 00:08:59.880 "nguid": "A563053A77384A409C1A434CDC66AFD5", 00:08:59.880 "nsid": 1, 00:08:59.880 "uuid": "a563053a-7738-4a40-9c1a-434cdc66afd5" 00:08:59.880 } 00:08:59.880 ], 00:08:59.880 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:59.880 "serial_number": "SPDK00000000000002", 00:08:59.880 "subtype": "NVMe" 00:08:59.880 }, 00:08:59.880 { 00:08:59.880 "allow_any_host": true, 00:08:59.880 "hosts": [], 00:08:59.880 "listen_addresses": [ 00:08:59.880 { 00:08:59.880 "adrfam": "IPv4", 00:08:59.880 "traddr": "10.0.0.2", 00:08:59.880 "transport": "TCP", 00:08:59.880 "trsvcid": "4420", 00:08:59.880 "trtype": "TCP" 00:08:59.880 } 00:08:59.880 ], 00:08:59.880 "max_cntlid": 65519, 00:08:59.880 "max_namespaces": 32, 00:08:59.880 "min_cntlid": 1, 00:08:59.880 "model_number": "SPDK bdev Controller", 00:08:59.880 "namespaces": [ 00:08:59.880 { 00:08:59.880 "bdev_name": "Null3", 00:08:59.880 "name": "Null3", 00:08:59.880 "nguid": "ABCB1B8FDAA5447AA0D5BF376969B76F", 00:08:59.880 "nsid": 1, 00:08:59.880 "uuid": "abcb1b8f-daa5-447a-a0d5-bf376969b76f" 00:08:59.880 } 00:08:59.880 ], 00:08:59.880 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:59.880 "serial_number": "SPDK00000000000003", 00:08:59.880 "subtype": "NVMe" 00:08:59.880 }, 00:08:59.880 { 00:08:59.880 "allow_any_host": true, 00:08:59.880 "hosts": [], 00:08:59.880 "listen_addresses": [ 00:08:59.880 { 00:08:59.880 "adrfam": "IPv4", 00:08:59.880 "traddr": "10.0.0.2", 00:08:59.880 "transport": "TCP", 00:08:59.880 "trsvcid": "4420", 00:08:59.880 "trtype": "TCP" 00:08:59.880 } 00:08:59.880 ], 00:08:59.880 "max_cntlid": 65519, 00:08:59.880 "max_namespaces": 32, 00:08:59.880 "min_cntlid": 1, 00:08:59.880 "model_number": "SPDK bdev Controller", 00:08:59.880 "namespaces": [ 00:08:59.880 { 00:08:59.880 "bdev_name": "Null4", 00:08:59.880 "name": "Null4", 00:08:59.880 "nguid": "0A12D1ED8C2544479687D927CD5CE366", 00:08:59.880 "nsid": 1, 00:08:59.880 "uuid": "0a12d1ed-8c25-4447-9687-d927cd5ce366" 00:08:59.880 } 00:08:59.880 ], 00:08:59.880 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:59.880 "serial_number": "SPDK00000000000004", 00:08:59.880 "subtype": "NVMe" 00:08:59.880 } 00:08:59.880 ] 00:08:59.880 14:15:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.880 14:15:05 -- target/discovery.sh@42 -- # seq 1 4 00:08:59.880 14:15:05 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:59.880 14:15:05 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:59.880 14:15:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.880 14:15:05 -- common/autotest_common.sh@10 -- # set +x 00:08:59.880 14:15:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.880 14:15:05 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:59.880 14:15:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.880 14:15:05 -- common/autotest_common.sh@10 -- # set +x 00:08:59.880 14:15:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.880 14:15:05 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:59.880 14:15:05 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:59.880 14:15:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.880 14:15:05 -- common/autotest_common.sh@10 -- # set +x 00:08:59.880 14:15:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.880 14:15:05 -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:59.880 14:15:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.880 14:15:05 -- common/autotest_common.sh@10 -- # set +x 00:08:59.880 14:15:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.880 14:15:05 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:59.880 14:15:05 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:59.880 14:15:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.880 14:15:05 -- common/autotest_common.sh@10 -- # set +x 00:08:59.880 14:15:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.880 14:15:05 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:59.880 14:15:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.880 14:15:05 -- common/autotest_common.sh@10 -- # set +x 00:08:59.880 14:15:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.880 14:15:05 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:59.880 14:15:05 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:59.880 14:15:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.880 14:15:05 -- common/autotest_common.sh@10 -- # set +x 00:08:59.880 14:15:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.880 14:15:05 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:59.880 14:15:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.880 14:15:05 -- common/autotest_common.sh@10 -- # set +x 00:08:59.880 14:15:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.880 14:15:05 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:59.880 14:15:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.880 14:15:05 -- common/autotest_common.sh@10 -- # set +x 00:08:59.880 14:15:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.880 14:15:05 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:59.880 14:15:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.880 14:15:05 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:59.880 14:15:05 -- common/autotest_common.sh@10 -- # set +x 00:08:59.880 14:15:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.880 14:15:05 -- target/discovery.sh@49 -- # check_bdevs= 00:08:59.880 14:15:05 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:59.880 14:15:05 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:59.880 14:15:05 -- target/discovery.sh@57 -- # nvmftestfini 00:08:59.880 14:15:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:59.880 14:15:05 -- nvmf/common.sh@116 -- # sync 00:08:59.880 14:15:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:59.880 14:15:05 -- nvmf/common.sh@119 -- # set +e 00:08:59.880 14:15:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:59.880 14:15:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:59.880 rmmod nvme_tcp 00:08:59.880 rmmod nvme_fabrics 00:08:59.880 rmmod nvme_keyring 00:08:59.880 14:15:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:59.880 14:15:05 -- nvmf/common.sh@123 -- # set -e 00:08:59.880 14:15:05 -- nvmf/common.sh@124 -- # return 0 00:08:59.880 14:15:05 -- nvmf/common.sh@477 -- # '[' -n 73423 ']' 00:08:59.880 14:15:05 -- nvmf/common.sh@478 -- # killprocess 73423 00:08:59.880 14:15:05 -- common/autotest_common.sh@936 -- # '[' -z 73423 ']' 00:08:59.880 14:15:05 -- 
common/autotest_common.sh@940 -- # kill -0 73423 00:08:59.880 14:15:05 -- common/autotest_common.sh@941 -- # uname 00:08:59.880 14:15:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:59.880 14:15:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73423 00:09:00.138 killing process with pid 73423 00:09:00.138 14:15:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:00.138 14:15:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:00.138 14:15:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73423' 00:09:00.138 14:15:05 -- common/autotest_common.sh@955 -- # kill 73423 00:09:00.138 [2024-12-05 14:15:05.539535] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:09:00.138 14:15:05 -- common/autotest_common.sh@960 -- # wait 73423 00:09:00.138 14:15:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:00.138 14:15:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:00.138 14:15:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:00.138 14:15:05 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:00.138 14:15:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:00.138 14:15:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.138 14:15:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:00.138 14:15:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.138 14:15:05 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:00.138 00:09:00.138 real 0m2.456s 00:09:00.138 user 0m6.527s 00:09:00.138 sys 0m0.702s 00:09:00.138 14:15:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:00.138 14:15:05 -- common/autotest_common.sh@10 -- # set +x 00:09:00.138 ************************************ 00:09:00.138 END TEST nvmf_discovery 00:09:00.138 ************************************ 00:09:00.395 14:15:05 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:00.395 14:15:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:00.395 14:15:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:00.395 14:15:05 -- common/autotest_common.sh@10 -- # set +x 00:09:00.395 ************************************ 00:09:00.395 START TEST nvmf_referrals 00:09:00.395 ************************************ 00:09:00.395 14:15:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:00.395 * Looking for test storage... 
00:09:00.395 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:00.395 14:15:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:00.395 14:15:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:00.395 14:15:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:00.395 14:15:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:00.395 14:15:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:00.395 14:15:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:00.395 14:15:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:00.395 14:15:05 -- scripts/common.sh@335 -- # IFS=.-: 00:09:00.395 14:15:05 -- scripts/common.sh@335 -- # read -ra ver1 00:09:00.395 14:15:05 -- scripts/common.sh@336 -- # IFS=.-: 00:09:00.395 14:15:05 -- scripts/common.sh@336 -- # read -ra ver2 00:09:00.395 14:15:05 -- scripts/common.sh@337 -- # local 'op=<' 00:09:00.395 14:15:05 -- scripts/common.sh@339 -- # ver1_l=2 00:09:00.395 14:15:06 -- scripts/common.sh@340 -- # ver2_l=1 00:09:00.395 14:15:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:00.395 14:15:06 -- scripts/common.sh@343 -- # case "$op" in 00:09:00.395 14:15:06 -- scripts/common.sh@344 -- # : 1 00:09:00.395 14:15:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:00.395 14:15:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:00.395 14:15:06 -- scripts/common.sh@364 -- # decimal 1 00:09:00.395 14:15:06 -- scripts/common.sh@352 -- # local d=1 00:09:00.395 14:15:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:00.395 14:15:06 -- scripts/common.sh@354 -- # echo 1 00:09:00.395 14:15:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:00.395 14:15:06 -- scripts/common.sh@365 -- # decimal 2 00:09:00.395 14:15:06 -- scripts/common.sh@352 -- # local d=2 00:09:00.395 14:15:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:00.395 14:15:06 -- scripts/common.sh@354 -- # echo 2 00:09:00.395 14:15:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:00.395 14:15:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:00.395 14:15:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:00.395 14:15:06 -- scripts/common.sh@367 -- # return 0 00:09:00.395 14:15:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:00.395 14:15:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:00.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.395 --rc genhtml_branch_coverage=1 00:09:00.395 --rc genhtml_function_coverage=1 00:09:00.395 --rc genhtml_legend=1 00:09:00.395 --rc geninfo_all_blocks=1 00:09:00.395 --rc geninfo_unexecuted_blocks=1 00:09:00.395 00:09:00.395 ' 00:09:00.395 14:15:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:00.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.395 --rc genhtml_branch_coverage=1 00:09:00.395 --rc genhtml_function_coverage=1 00:09:00.395 --rc genhtml_legend=1 00:09:00.395 --rc geninfo_all_blocks=1 00:09:00.395 --rc geninfo_unexecuted_blocks=1 00:09:00.395 00:09:00.395 ' 00:09:00.395 14:15:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:00.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.395 --rc genhtml_branch_coverage=1 00:09:00.395 --rc genhtml_function_coverage=1 00:09:00.395 --rc genhtml_legend=1 00:09:00.395 --rc geninfo_all_blocks=1 00:09:00.395 --rc geninfo_unexecuted_blocks=1 00:09:00.395 00:09:00.395 ' 00:09:00.395 
14:15:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:00.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.395 --rc genhtml_branch_coverage=1 00:09:00.395 --rc genhtml_function_coverage=1 00:09:00.395 --rc genhtml_legend=1 00:09:00.395 --rc geninfo_all_blocks=1 00:09:00.395 --rc geninfo_unexecuted_blocks=1 00:09:00.395 00:09:00.395 ' 00:09:00.395 14:15:06 -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:00.395 14:15:06 -- nvmf/common.sh@7 -- # uname -s 00:09:00.395 14:15:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:00.395 14:15:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:00.395 14:15:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:00.395 14:15:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:00.395 14:15:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:00.395 14:15:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:00.395 14:15:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:00.395 14:15:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:00.395 14:15:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:00.395 14:15:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:00.395 14:15:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:09:00.395 14:15:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:09:00.395 14:15:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:00.395 14:15:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:00.395 14:15:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:00.395 14:15:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:00.395 14:15:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.395 14:15:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.395 14:15:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.395 14:15:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.395 14:15:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.395 14:15:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.395 14:15:06 -- paths/export.sh@5 -- # export PATH 00:09:00.395 14:15:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.395 14:15:06 -- nvmf/common.sh@46 -- # : 0 00:09:00.395 14:15:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:00.395 14:15:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:00.395 14:15:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:00.395 14:15:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:00.652 14:15:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:00.652 14:15:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:00.652 14:15:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:00.652 14:15:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:00.652 14:15:06 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:00.652 14:15:06 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:09:00.652 14:15:06 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:09:00.652 14:15:06 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:00.652 14:15:06 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:00.652 14:15:06 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:00.652 14:15:06 -- target/referrals.sh@37 -- # nvmftestinit 00:09:00.652 14:15:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:00.652 14:15:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:00.652 14:15:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:00.652 14:15:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:00.652 14:15:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:00.652 14:15:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.652 14:15:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:00.652 14:15:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.652 14:15:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:00.652 14:15:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:00.652 14:15:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:00.652 14:15:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:00.652 14:15:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:00.652 14:15:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:00.652 14:15:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:00.652 14:15:06 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:09:00.652 14:15:06 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:00.652 14:15:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:00.652 14:15:06 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:00.652 14:15:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:00.652 14:15:06 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:00.652 14:15:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:00.652 14:15:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:00.652 14:15:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:00.652 14:15:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:00.652 14:15:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:00.652 14:15:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:00.652 14:15:06 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:00.652 Cannot find device "nvmf_tgt_br" 00:09:00.652 14:15:06 -- nvmf/common.sh@154 -- # true 00:09:00.652 14:15:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:00.652 Cannot find device "nvmf_tgt_br2" 00:09:00.652 14:15:06 -- nvmf/common.sh@155 -- # true 00:09:00.652 14:15:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:00.652 14:15:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:00.652 Cannot find device "nvmf_tgt_br" 00:09:00.652 14:15:06 -- nvmf/common.sh@157 -- # true 00:09:00.652 14:15:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:00.652 Cannot find device "nvmf_tgt_br2" 00:09:00.652 14:15:06 -- nvmf/common.sh@158 -- # true 00:09:00.652 14:15:06 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:00.652 14:15:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:00.652 14:15:06 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:00.652 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:00.652 14:15:06 -- nvmf/common.sh@161 -- # true 00:09:00.652 14:15:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:00.652 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:00.652 14:15:06 -- nvmf/common.sh@162 -- # true 00:09:00.653 14:15:06 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:00.653 14:15:06 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:00.653 14:15:06 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:00.653 14:15:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:00.653 14:15:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:00.653 14:15:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:00.653 14:15:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:00.653 14:15:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:00.653 14:15:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:00.653 14:15:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:00.653 14:15:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:00.653 14:15:06 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 
00:09:00.653 14:15:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:00.653 14:15:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:00.653 14:15:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:00.910 14:15:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:00.910 14:15:06 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:00.910 14:15:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:00.910 14:15:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:00.910 14:15:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:00.910 14:15:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:00.910 14:15:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:00.910 14:15:06 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:00.910 14:15:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:00.910 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:00.910 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:09:00.910 00:09:00.910 --- 10.0.0.2 ping statistics --- 00:09:00.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.910 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:09:00.910 14:15:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:00.910 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:00.910 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:09:00.910 00:09:00.910 --- 10.0.0.3 ping statistics --- 00:09:00.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.910 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:09:00.910 14:15:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:00.910 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:00.910 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:09:00.910 00:09:00.910 --- 10.0.0.1 ping statistics --- 00:09:00.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.910 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:09:00.910 14:15:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:00.910 14:15:06 -- nvmf/common.sh@421 -- # return 0 00:09:00.910 14:15:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:00.910 14:15:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:00.910 14:15:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:00.910 14:15:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:00.910 14:15:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:00.910 14:15:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:00.910 14:15:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:00.910 14:15:06 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:00.910 14:15:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:00.910 14:15:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:00.910 14:15:06 -- common/autotest_common.sh@10 -- # set +x 00:09:00.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
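(Annotation: the "Cannot find device ..." and "Cannot open network namespace ..." messages above are the expected no-op teardown before nvmf_veth_init rebuilds the test topology; each failing command is immediately followed by `true` in the trace. Condensed into a stand-alone sketch, with interface names, addresses and firewall rules exactly as traced and the per-interface `ip link set ... up` steps elided:)

    # One namespace for the target, three veth pairs, one bridge tying the host-side
    # ends together; the initiator reaches 10.0.0.2/10.0.0.3 inside the namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # target, first IP
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2        # target, second IP
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                         # host -> namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                # namespace -> host

(The `modprobe nvme-tcp` in the trace above loads the kernel initiator that the later `nvme discover` calls rely on.)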
00:09:00.910 14:15:06 -- nvmf/common.sh@469 -- # nvmfpid=73652 00:09:00.910 14:15:06 -- nvmf/common.sh@470 -- # waitforlisten 73652 00:09:00.910 14:15:06 -- common/autotest_common.sh@829 -- # '[' -z 73652 ']' 00:09:00.910 14:15:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:00.910 14:15:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.910 14:15:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:00.910 14:15:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.910 14:15:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:00.910 14:15:06 -- common/autotest_common.sh@10 -- # set +x 00:09:00.910 [2024-12-05 14:15:06.458206] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:00.910 [2024-12-05 14:15:06.458445] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.168 [2024-12-05 14:15:06.594045] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:01.168 [2024-12-05 14:15:06.653685] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:01.168 [2024-12-05 14:15:06.654169] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:01.168 [2024-12-05 14:15:06.654227] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:01.168 [2024-12-05 14:15:06.654388] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
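(Annotation: the nvmf_tgt launch above runs inside the namespace so its TCP listeners bind to the namespaced 10.0.0.x addresses, while the RPC endpoint, a Unix socket at /var/tmp/spdk.sock, stays reachable from the host side. A rough stand-in for nvmfappstart plus waitforlisten, with flag meanings taken from the NOTICE lines the app prints: -m 0xF yields reactors on cores 0-3, -e 0xFFFF is the tracepoint group mask, and -i appears to select the shared-memory instance id behind the spdk0 file prefix:)

    # Rough equivalent of nvmfappstart + waitforlisten as traced above; the real
    # helpers do more (timing, pid bookkeeping, a 100-iteration retry cap).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # block until the RPC socket named in the "Waiting for process..." message exists
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done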
00:09:01.168 [2024-12-05 14:15:06.654534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.168 [2024-12-05 14:15:06.654675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:01.168 [2024-12-05 14:15:06.654867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:01.168 [2024-12-05 14:15:06.654871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.154 14:15:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:02.154 14:15:07 -- common/autotest_common.sh@862 -- # return 0 00:09:02.154 14:15:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:02.154 14:15:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:02.154 14:15:07 -- common/autotest_common.sh@10 -- # set +x 00:09:02.154 14:15:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:02.154 14:15:07 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:02.154 14:15:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.154 14:15:07 -- common/autotest_common.sh@10 -- # set +x 00:09:02.154 [2024-12-05 14:15:07.535365] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:02.154 14:15:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.154 14:15:07 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:09:02.154 14:15:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.154 14:15:07 -- common/autotest_common.sh@10 -- # set +x 00:09:02.154 [2024-12-05 14:15:07.559757] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:09:02.154 14:15:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.154 14:15:07 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:09:02.154 14:15:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.154 14:15:07 -- common/autotest_common.sh@10 -- # set +x 00:09:02.154 14:15:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.154 14:15:07 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:09:02.154 14:15:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.154 14:15:07 -- common/autotest_common.sh@10 -- # set +x 00:09:02.154 14:15:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.154 14:15:07 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:09:02.154 14:15:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.154 14:15:07 -- common/autotest_common.sh@10 -- # set +x 00:09:02.154 14:15:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.154 14:15:07 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:02.154 14:15:07 -- target/referrals.sh@48 -- # jq length 00:09:02.154 14:15:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.154 14:15:07 -- common/autotest_common.sh@10 -- # set +x 00:09:02.154 14:15:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.154 14:15:07 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:02.154 14:15:07 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:02.154 14:15:07 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:02.154 14:15:07 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:02.154 14:15:07 -- common/autotest_common.sh@561 -- # xtrace_disable 
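(Annotation: the referral checks above and below reduce to a small RPC/initiator round trip; a condensed sketch, assuming the same scripts/rpc.py client and default RPC socket as before, with addresses, ports and the jq filter copied from the trace:)

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery    # discovery service
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        "$rpc" nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430             # advertise referral
    done
    "$rpc" nvmf_discovery_get_referrals | jq length                            # expect 3
    # same view from the initiator side, via the kernel nvme-tcp stack
    # (the trace also passes --hostnqn/--hostid; omitted here)
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        "$rpc" nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done
    "$rpc" nvmf_discovery_get_referrals | jq length                            # expect 0

(The bare `echo` near the end of the trace is consistent with an empty referral list once all three referrals have been removed.)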
00:09:02.154 14:15:07 -- common/autotest_common.sh@10 -- # set +x 00:09:02.154 14:15:07 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:02.154 14:15:07 -- target/referrals.sh@21 -- # sort 00:09:02.154 14:15:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.154 14:15:07 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:02.154 14:15:07 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:02.154 14:15:07 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:02.154 14:15:07 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:02.155 14:15:07 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:02.155 14:15:07 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:02.155 14:15:07 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:02.155 14:15:07 -- target/referrals.sh@26 -- # sort 00:09:02.412 14:15:07 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:02.412 14:15:07 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:02.412 14:15:07 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:09:02.412 14:15:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.413 14:15:07 -- common/autotest_common.sh@10 -- # set +x 00:09:02.413 14:15:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.413 14:15:07 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:09:02.413 14:15:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.413 14:15:07 -- common/autotest_common.sh@10 -- # set +x 00:09:02.413 14:15:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.413 14:15:07 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:09:02.413 14:15:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.413 14:15:07 -- common/autotest_common.sh@10 -- # set +x 00:09:02.413 14:15:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.413 14:15:07 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:02.413 14:15:07 -- target/referrals.sh@56 -- # jq length 00:09:02.413 14:15:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.413 14:15:07 -- common/autotest_common.sh@10 -- # set +x 00:09:02.413 14:15:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.413 14:15:07 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:02.413 14:15:07 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:02.413 14:15:07 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:02.413 14:15:07 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:02.413 14:15:07 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:02.413 14:15:07 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:02.413 14:15:07 -- target/referrals.sh@26 -- # sort 00:09:02.413 14:15:08 -- target/referrals.sh@26 -- # echo 00:09:02.413 14:15:08 -- 
target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:02.413 14:15:08 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:09:02.413 14:15:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.413 14:15:08 -- common/autotest_common.sh@10 -- # set +x 00:09:02.413 14:15:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.413 14:15:08 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:02.413 14:15:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.413 14:15:08 -- common/autotest_common.sh@10 -- # set +x 00:09:02.671 14:15:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.671 14:15:08 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:02.671 14:15:08 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:02.671 14:15:08 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:02.671 14:15:08 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:02.671 14:15:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.671 14:15:08 -- common/autotest_common.sh@10 -- # set +x 00:09:02.671 14:15:08 -- target/referrals.sh@21 -- # sort 00:09:02.672 14:15:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.672 14:15:08 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:02.672 14:15:08 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:02.672 14:15:08 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:02.672 14:15:08 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:02.672 14:15:08 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:02.672 14:15:08 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:02.672 14:15:08 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:02.672 14:15:08 -- target/referrals.sh@26 -- # sort 00:09:02.672 14:15:08 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:02.672 14:15:08 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:02.672 14:15:08 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:02.672 14:15:08 -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:02.672 14:15:08 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:02.672 14:15:08 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:02.672 14:15:08 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:02.930 14:15:08 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:02.930 14:15:08 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:02.930 14:15:08 -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:02.930 14:15:08 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:02.930 14:15:08 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 
--hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:02.930 14:15:08 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:02.930 14:15:08 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:02.930 14:15:08 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:02.930 14:15:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.930 14:15:08 -- common/autotest_common.sh@10 -- # set +x 00:09:02.930 14:15:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.930 14:15:08 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:02.930 14:15:08 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:02.930 14:15:08 -- target/referrals.sh@21 -- # sort 00:09:02.930 14:15:08 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:02.930 14:15:08 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:02.930 14:15:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.930 14:15:08 -- common/autotest_common.sh@10 -- # set +x 00:09:02.930 14:15:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.930 14:15:08 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:02.930 14:15:08 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:02.930 14:15:08 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:02.930 14:15:08 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:02.930 14:15:08 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:02.930 14:15:08 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:02.930 14:15:08 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:02.930 14:15:08 -- target/referrals.sh@26 -- # sort 00:09:03.187 14:15:08 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:03.187 14:15:08 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:03.187 14:15:08 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:03.187 14:15:08 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:03.187 14:15:08 -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:03.187 14:15:08 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:03.187 14:15:08 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:03.187 14:15:08 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:03.187 14:15:08 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:03.187 14:15:08 -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:03.187 14:15:08 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:03.187 14:15:08 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:03.187 14:15:08 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 
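On the host side, every RPC change is cross-checked against what a real discovery controller reports. The nvme-cli/jq pipeline in the trace reduces to the sketch below; the hostnqn/hostid pair is the one generated for this run, and any valid pair would do:

  # Fetch the discovery log page from 10.0.0.2:8009 as JSON and list the reported addresses.
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c \
      --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c |
    jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' |
    sort
  # Referrals show up as extra records; the select() drops the entry for the discovery
  # subsystem being queried, so only referral and NVMe-subsystem entries remain.
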
00:09:03.445 14:15:08 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:03.445 14:15:08 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:03.445 14:15:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.445 14:15:08 -- common/autotest_common.sh@10 -- # set +x 00:09:03.445 14:15:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.445 14:15:08 -- target/referrals.sh@82 -- # jq length 00:09:03.445 14:15:08 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:03.445 14:15:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.445 14:15:08 -- common/autotest_common.sh@10 -- # set +x 00:09:03.445 14:15:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.445 14:15:08 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:03.445 14:15:08 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:03.445 14:15:08 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:03.445 14:15:08 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:03.446 14:15:08 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:03.446 14:15:08 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:03.446 14:15:08 -- target/referrals.sh@26 -- # sort 00:09:03.704 14:15:09 -- target/referrals.sh@26 -- # echo 00:09:03.704 14:15:09 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:03.704 14:15:09 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:03.704 14:15:09 -- target/referrals.sh@86 -- # nvmftestfini 00:09:03.704 14:15:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:03.704 14:15:09 -- nvmf/common.sh@116 -- # sync 00:09:03.704 14:15:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:03.704 14:15:09 -- nvmf/common.sh@119 -- # set +e 00:09:03.704 14:15:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:03.704 14:15:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:03.704 rmmod nvme_tcp 00:09:03.704 rmmod nvme_fabrics 00:09:03.704 rmmod nvme_keyring 00:09:03.704 14:15:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:03.704 14:15:09 -- nvmf/common.sh@123 -- # set -e 00:09:03.704 14:15:09 -- nvmf/common.sh@124 -- # return 0 00:09:03.704 14:15:09 -- nvmf/common.sh@477 -- # '[' -n 73652 ']' 00:09:03.704 14:15:09 -- nvmf/common.sh@478 -- # killprocess 73652 00:09:03.704 14:15:09 -- common/autotest_common.sh@936 -- # '[' -z 73652 ']' 00:09:03.704 14:15:09 -- common/autotest_common.sh@940 -- # kill -0 73652 00:09:03.704 14:15:09 -- common/autotest_common.sh@941 -- # uname 00:09:03.704 14:15:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:03.704 14:15:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73652 00:09:03.704 14:15:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:03.704 14:15:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:03.704 14:15:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73652' 00:09:03.704 killing process with pid 73652 00:09:03.704 14:15:09 -- common/autotest_common.sh@955 -- # kill 73652 00:09:03.704 14:15:09 -- common/autotest_common.sh@960 -- # wait 73652 00:09:03.962 14:15:09 -- 
nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:03.962 14:15:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:03.962 14:15:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:03.962 14:15:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:03.962 14:15:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:03.962 14:15:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.962 14:15:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:03.962 14:15:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.962 14:15:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:03.962 00:09:03.962 real 0m3.673s 00:09:03.962 user 0m12.278s 00:09:03.962 sys 0m0.934s 00:09:03.962 14:15:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:03.962 14:15:09 -- common/autotest_common.sh@10 -- # set +x 00:09:03.962 ************************************ 00:09:03.962 END TEST nvmf_referrals 00:09:03.962 ************************************ 00:09:03.962 14:15:09 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:03.962 14:15:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:03.962 14:15:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:03.962 14:15:09 -- common/autotest_common.sh@10 -- # set +x 00:09:03.962 ************************************ 00:09:03.962 START TEST nvmf_connect_disconnect 00:09:03.962 ************************************ 00:09:03.962 14:15:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:04.221 * Looking for test storage... 00:09:04.221 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:04.221 14:15:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:04.222 14:15:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:04.222 14:15:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:04.222 14:15:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:04.222 14:15:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:04.222 14:15:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:04.222 14:15:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:04.222 14:15:09 -- scripts/common.sh@335 -- # IFS=.-: 00:09:04.222 14:15:09 -- scripts/common.sh@335 -- # read -ra ver1 00:09:04.222 14:15:09 -- scripts/common.sh@336 -- # IFS=.-: 00:09:04.222 14:15:09 -- scripts/common.sh@336 -- # read -ra ver2 00:09:04.222 14:15:09 -- scripts/common.sh@337 -- # local 'op=<' 00:09:04.222 14:15:09 -- scripts/common.sh@339 -- # ver1_l=2 00:09:04.222 14:15:09 -- scripts/common.sh@340 -- # ver2_l=1 00:09:04.222 14:15:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:04.222 14:15:09 -- scripts/common.sh@343 -- # case "$op" in 00:09:04.222 14:15:09 -- scripts/common.sh@344 -- # : 1 00:09:04.222 14:15:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:04.222 14:15:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:04.222 14:15:09 -- scripts/common.sh@364 -- # decimal 1 00:09:04.222 14:15:09 -- scripts/common.sh@352 -- # local d=1 00:09:04.222 14:15:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:04.222 14:15:09 -- scripts/common.sh@354 -- # echo 1 00:09:04.222 14:15:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:04.222 14:15:09 -- scripts/common.sh@365 -- # decimal 2 00:09:04.222 14:15:09 -- scripts/common.sh@352 -- # local d=2 00:09:04.222 14:15:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:04.222 14:15:09 -- scripts/common.sh@354 -- # echo 2 00:09:04.222 14:15:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:04.222 14:15:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:04.222 14:15:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:04.222 14:15:09 -- scripts/common.sh@367 -- # return 0 00:09:04.222 14:15:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:04.222 14:15:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:04.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.222 --rc genhtml_branch_coverage=1 00:09:04.222 --rc genhtml_function_coverage=1 00:09:04.222 --rc genhtml_legend=1 00:09:04.222 --rc geninfo_all_blocks=1 00:09:04.222 --rc geninfo_unexecuted_blocks=1 00:09:04.222 00:09:04.222 ' 00:09:04.222 14:15:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:04.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.222 --rc genhtml_branch_coverage=1 00:09:04.222 --rc genhtml_function_coverage=1 00:09:04.222 --rc genhtml_legend=1 00:09:04.222 --rc geninfo_all_blocks=1 00:09:04.222 --rc geninfo_unexecuted_blocks=1 00:09:04.222 00:09:04.222 ' 00:09:04.222 14:15:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:04.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.222 --rc genhtml_branch_coverage=1 00:09:04.222 --rc genhtml_function_coverage=1 00:09:04.222 --rc genhtml_legend=1 00:09:04.222 --rc geninfo_all_blocks=1 00:09:04.222 --rc geninfo_unexecuted_blocks=1 00:09:04.222 00:09:04.222 ' 00:09:04.222 14:15:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:04.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.222 --rc genhtml_branch_coverage=1 00:09:04.222 --rc genhtml_function_coverage=1 00:09:04.222 --rc genhtml_legend=1 00:09:04.222 --rc geninfo_all_blocks=1 00:09:04.222 --rc geninfo_unexecuted_blocks=1 00:09:04.222 00:09:04.222 ' 00:09:04.222 14:15:09 -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:04.222 14:15:09 -- nvmf/common.sh@7 -- # uname -s 00:09:04.222 14:15:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:04.222 14:15:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:04.222 14:15:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:04.222 14:15:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:04.222 14:15:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:04.222 14:15:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:04.222 14:15:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:04.222 14:15:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:04.222 14:15:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:04.222 14:15:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:04.222 14:15:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 
00:09:04.222 14:15:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:09:04.222 14:15:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:04.222 14:15:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:04.222 14:15:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:04.222 14:15:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:04.222 14:15:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:04.222 14:15:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:04.222 14:15:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:04.222 14:15:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.222 14:15:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.222 14:15:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.222 14:15:09 -- paths/export.sh@5 -- # export PATH 00:09:04.222 14:15:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.222 14:15:09 -- nvmf/common.sh@46 -- # : 0 00:09:04.222 14:15:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:04.222 14:15:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:04.222 14:15:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:04.222 14:15:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:04.222 14:15:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:04.222 14:15:09 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:09:04.222 14:15:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:04.222 14:15:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:04.222 14:15:09 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:04.223 14:15:09 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:04.223 14:15:09 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:04.223 14:15:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:04.223 14:15:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:04.223 14:15:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:04.223 14:15:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:04.223 14:15:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:04.223 14:15:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.223 14:15:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:04.223 14:15:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.223 14:15:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:04.223 14:15:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:04.223 14:15:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:04.223 14:15:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:04.223 14:15:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:04.223 14:15:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:04.223 14:15:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:04.223 14:15:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:04.223 14:15:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:04.223 14:15:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:04.223 14:15:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:04.223 14:15:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:04.223 14:15:09 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:04.223 14:15:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:04.223 14:15:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:04.223 14:15:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:04.223 14:15:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:04.223 14:15:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:04.223 14:15:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:04.223 14:15:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:04.223 Cannot find device "nvmf_tgt_br" 00:09:04.223 14:15:09 -- nvmf/common.sh@154 -- # true 00:09:04.223 14:15:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:04.223 Cannot find device "nvmf_tgt_br2" 00:09:04.223 14:15:09 -- nvmf/common.sh@155 -- # true 00:09:04.223 14:15:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:04.223 14:15:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:04.223 Cannot find device "nvmf_tgt_br" 00:09:04.223 14:15:09 -- nvmf/common.sh@157 -- # true 00:09:04.223 14:15:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:04.223 Cannot find device "nvmf_tgt_br2" 00:09:04.223 14:15:09 -- nvmf/common.sh@158 -- # true 00:09:04.223 14:15:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:04.482 14:15:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:04.482 14:15:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:09:04.482 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:04.482 14:15:09 -- nvmf/common.sh@161 -- # true 00:09:04.482 14:15:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:04.482 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:04.482 14:15:09 -- nvmf/common.sh@162 -- # true 00:09:04.482 14:15:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:04.482 14:15:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:04.482 14:15:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:04.482 14:15:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:04.482 14:15:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:04.482 14:15:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:04.482 14:15:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:04.482 14:15:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:04.482 14:15:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:04.482 14:15:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:04.482 14:15:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:04.482 14:15:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:04.482 14:15:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:04.482 14:15:10 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:04.482 14:15:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:04.482 14:15:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:04.482 14:15:10 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:04.482 14:15:10 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:04.482 14:15:10 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:04.482 14:15:10 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:04.482 14:15:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:04.482 14:15:10 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:04.482 14:15:10 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:04.482 14:15:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:04.482 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:04.482 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:09:04.482 00:09:04.482 --- 10.0.0.2 ping statistics --- 00:09:04.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.482 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:09:04.482 14:15:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:04.482 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:04.482 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:09:04.482 00:09:04.482 --- 10.0.0.3 ping statistics --- 00:09:04.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.482 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:09:04.482 14:15:10 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:04.482 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:04.482 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:09:04.482 00:09:04.482 --- 10.0.0.1 ping statistics --- 00:09:04.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.482 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:09:04.482 14:15:10 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:04.482 14:15:10 -- nvmf/common.sh@421 -- # return 0 00:09:04.482 14:15:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:04.482 14:15:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:04.482 14:15:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:04.482 14:15:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:04.482 14:15:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:04.482 14:15:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:04.482 14:15:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:04.482 14:15:10 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:04.482 14:15:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:04.482 14:15:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:04.482 14:15:10 -- common/autotest_common.sh@10 -- # set +x 00:09:04.741 14:15:10 -- nvmf/common.sh@469 -- # nvmfpid=73968 00:09:04.741 14:15:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:04.741 14:15:10 -- nvmf/common.sh@470 -- # waitforlisten 73968 00:09:04.741 14:15:10 -- common/autotest_common.sh@829 -- # '[' -z 73968 ']' 00:09:04.741 14:15:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.741 14:15:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:04.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.741 14:15:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.741 14:15:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:04.741 14:15:10 -- common/autotest_common.sh@10 -- # set +x 00:09:04.741 [2024-12-05 14:15:10.179746] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:04.741 [2024-12-05 14:15:10.179824] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:04.741 [2024-12-05 14:15:10.314377] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:04.741 [2024-12-05 14:15:10.374061] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:04.741 [2024-12-05 14:15:10.374211] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:04.741 [2024-12-05 14:15:10.374226] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:04.741 [2024-12-05 14:15:10.374234] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
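The nvmf_veth_init sequence traced above builds the same virtual topology for every NVMe/TCP test in this run: a network namespace for the target, veth pairs for the initiator and target sides, fixed 10.0.0.x addresses, and a bridge joining the host-side ends. A condensed sketch of that setup, with names and addresses taken from the trace (the earlier "Cannot find device"/"Cannot open network namespace" messages are just cleanup of a topology that does not exist yet):

  ip netns add nvmf_tgt_ns_spdk

  # Three veth pairs: one initiator-facing, two target-facing.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # Initiator is 10.0.0.1; the target namespace answers on 10.0.0.2 and 10.0.0.3.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # Bring everything up and bridge the host-side peers together.
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # Allow NVMe/TCP (port 4420) in, allow forwarding across the bridge, then sanity-check.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
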
00:09:04.741 [2024-12-05 14:15:10.374393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.741 [2024-12-05 14:15:10.374528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:04.741 [2024-12-05 14:15:10.375217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:04.741 [2024-12-05 14:15:10.375269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.677 14:15:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:05.677 14:15:11 -- common/autotest_common.sh@862 -- # return 0 00:09:05.677 14:15:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:05.677 14:15:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:05.677 14:15:11 -- common/autotest_common.sh@10 -- # set +x 00:09:05.677 14:15:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:05.677 14:15:11 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:05.677 14:15:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.677 14:15:11 -- common/autotest_common.sh@10 -- # set +x 00:09:05.677 [2024-12-05 14:15:11.272117] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:05.677 14:15:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.677 14:15:11 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:05.677 14:15:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.677 14:15:11 -- common/autotest_common.sh@10 -- # set +x 00:09:05.677 14:15:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.677 14:15:11 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:05.677 14:15:11 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:05.677 14:15:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.677 14:15:11 -- common/autotest_common.sh@10 -- # set +x 00:09:05.936 14:15:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.936 14:15:11 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:05.936 14:15:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.936 14:15:11 -- common/autotest_common.sh@10 -- # set +x 00:09:05.936 14:15:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.936 14:15:11 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:05.936 14:15:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.936 14:15:11 -- common/autotest_common.sh@10 -- # set +x 00:09:05.936 [2024-12-05 14:15:11.338064] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:05.936 14:15:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.936 14:15:11 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:09:05.936 14:15:11 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:09:05.936 14:15:11 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:09:05.936 14:15:11 -- target/connect_disconnect.sh@34 -- # set +x 00:09:08.468 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.904 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.809 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:09:17.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.886 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.790 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.343 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.246 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.778 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.677 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.738 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.179 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.615 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.520 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.054 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.588 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.088 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.996 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.476 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.546 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.464 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.999 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.903 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.457 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.359 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.897 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.801 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.336 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.241 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.802 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.282 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.236 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.672 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.207 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.740 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.675 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.204 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.107 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.639 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.543 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.075 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.992 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.526 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:11:08.058 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.492 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.396 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.515 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.386 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.288 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.251 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.127 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.564 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.097 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.541 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.072 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.503 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.407 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.939 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.846 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.383 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.288 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.819 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.253 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.819 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.724 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.709 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.626 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.215 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.119 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.653 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.624 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.624 14:18:56 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
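Each timestamped "disconnected 1 controller(s)" line above is one iteration of the test loop: connect to the cnode1 subsystem over TCP with 8 I/O queues (NVME_CONNECT='nvme connect -i 8' and num_iterations=100 in the trace), then disconnect it again by NQN. A rough sketch of the loop; the real script also verifies that the controller and namespace appear between the two calls:

  for i in $(seq 1 100); do
      # Connect to the subsystem exported on 10.0.0.2:4420 with 8 I/O queues.
      nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
          --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c \
          --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c
      # nvme-cli prints the "NQN:... disconnected 1 controller(s)" lines seen above.
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  done
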
00:12:51.625 14:18:56 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:51.625 14:18:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:51.625 14:18:56 -- nvmf/common.sh@116 -- # sync 00:12:51.625 14:18:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:51.625 14:18:56 -- nvmf/common.sh@119 -- # set +e 00:12:51.625 14:18:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:51.625 14:18:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:51.625 rmmod nvme_tcp 00:12:51.625 rmmod nvme_fabrics 00:12:51.625 rmmod nvme_keyring 00:12:51.625 14:18:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:51.625 14:18:56 -- nvmf/common.sh@123 -- # set -e 00:12:51.625 14:18:56 -- nvmf/common.sh@124 -- # return 0 00:12:51.625 14:18:56 -- nvmf/common.sh@477 -- # '[' -n 73968 ']' 00:12:51.625 14:18:56 -- nvmf/common.sh@478 -- # killprocess 73968 00:12:51.625 14:18:56 -- common/autotest_common.sh@936 -- # '[' -z 73968 ']' 00:12:51.625 14:18:56 -- common/autotest_common.sh@940 -- # kill -0 73968 00:12:51.625 14:18:56 -- common/autotest_common.sh@941 -- # uname 00:12:51.625 14:18:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:51.625 14:18:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73968 00:12:51.625 14:18:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:51.625 14:18:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:51.625 killing process with pid 73968 00:12:51.625 14:18:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73968' 00:12:51.625 14:18:56 -- common/autotest_common.sh@955 -- # kill 73968 00:12:51.625 14:18:56 -- common/autotest_common.sh@960 -- # wait 73968 00:12:51.625 14:18:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:51.625 14:18:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:51.625 14:18:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:51.625 14:18:57 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:51.625 14:18:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:51.625 14:18:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.625 14:18:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:51.625 14:18:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.625 14:18:57 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:51.625 00:12:51.625 real 3m47.487s 00:12:51.625 user 14m49.781s 00:12:51.625 sys 0m18.723s 00:12:51.625 14:18:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:51.625 14:18:57 -- common/autotest_common.sh@10 -- # set +x 00:12:51.625 ************************************ 00:12:51.625 END TEST nvmf_connect_disconnect 00:12:51.625 ************************************ 00:12:51.625 14:18:57 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:51.625 14:18:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:51.625 14:18:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:51.625 14:18:57 -- common/autotest_common.sh@10 -- # set +x 00:12:51.625 ************************************ 00:12:51.625 START TEST nvmf_multitarget 00:12:51.625 ************************************ 00:12:51.625 14:18:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:51.625 * Looking for test storage... 
00:12:51.625 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:51.625 14:18:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:51.625 14:18:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:51.625 14:18:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:51.625 14:18:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:51.625 14:18:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:51.625 14:18:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:51.625 14:18:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:51.625 14:18:57 -- scripts/common.sh@335 -- # IFS=.-: 00:12:51.625 14:18:57 -- scripts/common.sh@335 -- # read -ra ver1 00:12:51.625 14:18:57 -- scripts/common.sh@336 -- # IFS=.-: 00:12:51.625 14:18:57 -- scripts/common.sh@336 -- # read -ra ver2 00:12:51.625 14:18:57 -- scripts/common.sh@337 -- # local 'op=<' 00:12:51.625 14:18:57 -- scripts/common.sh@339 -- # ver1_l=2 00:12:51.625 14:18:57 -- scripts/common.sh@340 -- # ver2_l=1 00:12:51.625 14:18:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:51.625 14:18:57 -- scripts/common.sh@343 -- # case "$op" in 00:12:51.625 14:18:57 -- scripts/common.sh@344 -- # : 1 00:12:51.625 14:18:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:51.625 14:18:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:51.625 14:18:57 -- scripts/common.sh@364 -- # decimal 1 00:12:51.625 14:18:57 -- scripts/common.sh@352 -- # local d=1 00:12:51.625 14:18:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:51.625 14:18:57 -- scripts/common.sh@354 -- # echo 1 00:12:51.625 14:18:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:51.625 14:18:57 -- scripts/common.sh@365 -- # decimal 2 00:12:51.625 14:18:57 -- scripts/common.sh@352 -- # local d=2 00:12:51.625 14:18:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:51.625 14:18:57 -- scripts/common.sh@354 -- # echo 2 00:12:51.625 14:18:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:51.625 14:18:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:51.625 14:18:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:51.625 14:18:57 -- scripts/common.sh@367 -- # return 0 00:12:51.625 14:18:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:51.625 14:18:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:51.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.625 --rc genhtml_branch_coverage=1 00:12:51.625 --rc genhtml_function_coverage=1 00:12:51.625 --rc genhtml_legend=1 00:12:51.625 --rc geninfo_all_blocks=1 00:12:51.625 --rc geninfo_unexecuted_blocks=1 00:12:51.625 00:12:51.625 ' 00:12:51.625 14:18:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:51.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.625 --rc genhtml_branch_coverage=1 00:12:51.625 --rc genhtml_function_coverage=1 00:12:51.625 --rc genhtml_legend=1 00:12:51.625 --rc geninfo_all_blocks=1 00:12:51.625 --rc geninfo_unexecuted_blocks=1 00:12:51.625 00:12:51.625 ' 00:12:51.625 14:18:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:51.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.625 --rc genhtml_branch_coverage=1 00:12:51.625 --rc genhtml_function_coverage=1 00:12:51.625 --rc genhtml_legend=1 00:12:51.625 --rc geninfo_all_blocks=1 00:12:51.625 --rc geninfo_unexecuted_blocks=1 00:12:51.625 00:12:51.625 ' 00:12:51.625 
14:18:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:51.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.625 --rc genhtml_branch_coverage=1 00:12:51.625 --rc genhtml_function_coverage=1 00:12:51.625 --rc genhtml_legend=1 00:12:51.625 --rc geninfo_all_blocks=1 00:12:51.625 --rc geninfo_unexecuted_blocks=1 00:12:51.625 00:12:51.625 ' 00:12:51.625 14:18:57 -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:51.625 14:18:57 -- nvmf/common.sh@7 -- # uname -s 00:12:51.625 14:18:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:51.625 14:18:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:51.625 14:18:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:51.625 14:18:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:51.625 14:18:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:51.625 14:18:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:51.625 14:18:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:51.625 14:18:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:51.625 14:18:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:51.625 14:18:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:51.625 14:18:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:12:51.884 14:18:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:12:51.884 14:18:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:51.884 14:18:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:51.884 14:18:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:51.884 14:18:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:51.884 14:18:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:51.884 14:18:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:51.884 14:18:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:51.884 14:18:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.884 14:18:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.884 14:18:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.884 14:18:57 -- paths/export.sh@5 -- # export PATH 00:12:51.884 14:18:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.884 14:18:57 -- nvmf/common.sh@46 -- # : 0 00:12:51.884 14:18:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:51.884 14:18:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:51.884 14:18:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:51.884 14:18:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:51.884 14:18:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:51.884 14:18:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:51.884 14:18:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:51.884 14:18:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:51.884 14:18:57 -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:51.884 14:18:57 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:51.884 14:18:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:51.884 14:18:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:51.884 14:18:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:51.884 14:18:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:51.884 14:18:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:51.884 14:18:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.884 14:18:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:51.884 14:18:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.884 14:18:57 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:51.884 14:18:57 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:51.884 14:18:57 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:51.884 14:18:57 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:51.884 14:18:57 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:51.884 14:18:57 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:51.884 14:18:57 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:51.884 14:18:57 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:51.884 14:18:57 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:51.884 14:18:57 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:51.884 14:18:57 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:51.884 14:18:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:51.884 14:18:57 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:51.884 14:18:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:51.884 14:18:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:51.884 14:18:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:51.884 14:18:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:51.884 14:18:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:51.884 14:18:57 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:51.884 14:18:57 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:51.884 Cannot find device "nvmf_tgt_br" 00:12:51.884 14:18:57 -- nvmf/common.sh@154 -- # true 00:12:51.884 14:18:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:51.884 Cannot find device "nvmf_tgt_br2" 00:12:51.884 14:18:57 -- nvmf/common.sh@155 -- # true 00:12:51.884 14:18:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:51.884 14:18:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:51.884 Cannot find device "nvmf_tgt_br" 00:12:51.884 14:18:57 -- nvmf/common.sh@157 -- # true 00:12:51.884 14:18:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:51.884 Cannot find device "nvmf_tgt_br2" 00:12:51.884 14:18:57 -- nvmf/common.sh@158 -- # true 00:12:51.884 14:18:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:51.884 14:18:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:51.884 14:18:57 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:51.884 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:51.884 14:18:57 -- nvmf/common.sh@161 -- # true 00:12:51.884 14:18:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:51.884 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:51.884 14:18:57 -- nvmf/common.sh@162 -- # true 00:12:51.884 14:18:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:51.884 14:18:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:51.884 14:18:57 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:51.884 14:18:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:51.884 14:18:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:51.884 14:18:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:51.884 14:18:57 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:51.884 14:18:57 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:51.884 14:18:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:51.884 14:18:57 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:51.884 14:18:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:51.884 14:18:57 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:51.884 14:18:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:51.884 14:18:57 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:51.884 14:18:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:51.884 14:18:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:12:52.142 14:18:57 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:52.142 14:18:57 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:52.142 14:18:57 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:52.142 14:18:57 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:52.142 14:18:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:52.142 14:18:57 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:52.142 14:18:57 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:52.142 14:18:57 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:52.142 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:52.142 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:12:52.142 00:12:52.142 --- 10.0.0.2 ping statistics --- 00:12:52.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.142 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:12:52.142 14:18:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:52.142 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:52.142 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:12:52.142 00:12:52.142 --- 10.0.0.3 ping statistics --- 00:12:52.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.142 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:12:52.142 14:18:57 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:52.142 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:52.142 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:12:52.142 00:12:52.142 --- 10.0.0.1 ping statistics --- 00:12:52.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.142 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:12:52.142 14:18:57 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:52.142 14:18:57 -- nvmf/common.sh@421 -- # return 0 00:12:52.142 14:18:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:52.142 14:18:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:52.142 14:18:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:52.142 14:18:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:52.142 14:18:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:52.142 14:18:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:52.142 14:18:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:52.142 14:18:57 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:52.142 14:18:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:52.142 14:18:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:52.142 14:18:57 -- common/autotest_common.sh@10 -- # set +x 00:12:52.142 14:18:57 -- nvmf/common.sh@469 -- # nvmfpid=77776 00:12:52.142 14:18:57 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:52.142 14:18:57 -- nvmf/common.sh@470 -- # waitforlisten 77776 00:12:52.142 14:18:57 -- common/autotest_common.sh@829 -- # '[' -z 77776 ']' 00:12:52.142 14:18:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
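The block above is nvmf_veth_init assembling the virtual test network: a namespace (nvmf_tgt_ns_spdk) that holds the target-side ends of the veth pairs, an initiator-side veth left in the host, a bridge tying the host-side peers together, an iptables rule admitting NVMe/TCP traffic on port 4420, and pings to confirm reachability before the target app is launched. A minimal standalone sketch of that topology, using the interface and namespace names from the trace (this is a simplified reconstruction, not the literal nvmf/common.sh code):

    # namespace that will host the SPDK target process
    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: *_if ends carry traffic, *_br ends get enslaved to the bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # addressing: initiator 10.0.0.1, target 10.0.0.2 (the trace also adds a
    # second target interface at 10.0.0.3, omitted here for brevity)
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    # bridge the host-side peers and open TCP/4420 for NVMe/TCP
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target sanity check, as in the trace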
00:12:52.142 14:18:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:52.142 14:18:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.142 14:18:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:52.142 14:18:57 -- common/autotest_common.sh@10 -- # set +x 00:12:52.142 [2024-12-05 14:18:57.705610] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:52.142 [2024-12-05 14:18:57.705697] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.401 [2024-12-05 14:18:57.844588] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:52.401 [2024-12-05 14:18:57.903452] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:52.401 [2024-12-05 14:18:57.903606] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.401 [2024-12-05 14:18:57.903621] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:52.401 [2024-12-05 14:18:57.903630] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:52.401 [2024-12-05 14:18:57.903901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.401 [2024-12-05 14:18:57.903951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.401 [2024-12-05 14:18:57.904362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:52.401 [2024-12-05 14:18:57.904368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.338 14:18:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:53.338 14:18:58 -- common/autotest_common.sh@862 -- # return 0 00:12:53.338 14:18:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:53.338 14:18:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:53.338 14:18:58 -- common/autotest_common.sh@10 -- # set +x 00:12:53.338 14:18:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:53.338 14:18:58 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:53.338 14:18:58 -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:53.338 14:18:58 -- target/multitarget.sh@21 -- # jq length 00:12:53.338 14:18:58 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:53.338 14:18:58 -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:53.597 "nvmf_tgt_1" 00:12:53.597 14:18:59 -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:53.597 "nvmf_tgt_2" 00:12:53.597 14:18:59 -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:53.597 14:18:59 -- target/multitarget.sh@28 -- # jq length 00:12:53.856 14:18:59 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:53.856 14:18:59 -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 
00:12:53.856 true 00:12:53.856 14:18:59 -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:54.115 true 00:12:54.115 14:18:59 -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:54.115 14:18:59 -- target/multitarget.sh@35 -- # jq length 00:12:54.115 14:18:59 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:54.115 14:18:59 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:54.115 14:18:59 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:54.115 14:18:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:54.115 14:18:59 -- nvmf/common.sh@116 -- # sync 00:12:54.115 14:18:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:54.115 14:18:59 -- nvmf/common.sh@119 -- # set +e 00:12:54.115 14:18:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:54.115 14:18:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:54.115 rmmod nvme_tcp 00:12:54.115 rmmod nvme_fabrics 00:12:54.374 rmmod nvme_keyring 00:12:54.374 14:18:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:54.374 14:18:59 -- nvmf/common.sh@123 -- # set -e 00:12:54.374 14:18:59 -- nvmf/common.sh@124 -- # return 0 00:12:54.374 14:18:59 -- nvmf/common.sh@477 -- # '[' -n 77776 ']' 00:12:54.374 14:18:59 -- nvmf/common.sh@478 -- # killprocess 77776 00:12:54.374 14:18:59 -- common/autotest_common.sh@936 -- # '[' -z 77776 ']' 00:12:54.374 14:18:59 -- common/autotest_common.sh@940 -- # kill -0 77776 00:12:54.374 14:18:59 -- common/autotest_common.sh@941 -- # uname 00:12:54.374 14:18:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:54.374 14:18:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77776 00:12:54.374 killing process with pid 77776 00:12:54.374 14:18:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:54.374 14:18:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:54.374 14:18:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77776' 00:12:54.374 14:18:59 -- common/autotest_common.sh@955 -- # kill 77776 00:12:54.374 14:18:59 -- common/autotest_common.sh@960 -- # wait 77776 00:12:54.374 14:19:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:54.374 14:19:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:54.374 14:19:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:54.374 14:19:00 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:54.374 14:19:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:54.374 14:19:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.374 14:19:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:54.374 14:19:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.633 14:19:00 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:54.633 00:12:54.633 real 0m2.947s 00:12:54.633 user 0m9.685s 00:12:54.633 sys 0m0.713s 00:12:54.633 14:19:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:54.633 14:19:00 -- common/autotest_common.sh@10 -- # set +x 00:12:54.633 ************************************ 00:12:54.633 END TEST nvmf_multitarget 00:12:54.633 ************************************ 00:12:54.633 14:19:00 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:54.633 14:19:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:54.633 14:19:00 
-- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:54.633 14:19:00 -- common/autotest_common.sh@10 -- # set +x 00:12:54.633 ************************************ 00:12:54.633 START TEST nvmf_rpc 00:12:54.633 ************************************ 00:12:54.633 14:19:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:54.633 * Looking for test storage... 00:12:54.633 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:54.633 14:19:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:54.633 14:19:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:54.633 14:19:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:54.633 14:19:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:54.633 14:19:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:54.633 14:19:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:54.633 14:19:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:54.633 14:19:00 -- scripts/common.sh@335 -- # IFS=.-: 00:12:54.633 14:19:00 -- scripts/common.sh@335 -- # read -ra ver1 00:12:54.633 14:19:00 -- scripts/common.sh@336 -- # IFS=.-: 00:12:54.633 14:19:00 -- scripts/common.sh@336 -- # read -ra ver2 00:12:54.633 14:19:00 -- scripts/common.sh@337 -- # local 'op=<' 00:12:54.633 14:19:00 -- scripts/common.sh@339 -- # ver1_l=2 00:12:54.633 14:19:00 -- scripts/common.sh@340 -- # ver2_l=1 00:12:54.633 14:19:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:54.633 14:19:00 -- scripts/common.sh@343 -- # case "$op" in 00:12:54.633 14:19:00 -- scripts/common.sh@344 -- # : 1 00:12:54.633 14:19:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:54.633 14:19:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:54.633 14:19:00 -- scripts/common.sh@364 -- # decimal 1 00:12:54.633 14:19:00 -- scripts/common.sh@352 -- # local d=1 00:12:54.633 14:19:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:54.633 14:19:00 -- scripts/common.sh@354 -- # echo 1 00:12:54.633 14:19:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:54.633 14:19:00 -- scripts/common.sh@365 -- # decimal 2 00:12:54.633 14:19:00 -- scripts/common.sh@352 -- # local d=2 00:12:54.633 14:19:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:54.633 14:19:00 -- scripts/common.sh@354 -- # echo 2 00:12:54.633 14:19:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:54.633 14:19:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:54.633 14:19:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:54.633 14:19:00 -- scripts/common.sh@367 -- # return 0 00:12:54.633 14:19:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:54.633 14:19:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:54.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.633 --rc genhtml_branch_coverage=1 00:12:54.633 --rc genhtml_function_coverage=1 00:12:54.633 --rc genhtml_legend=1 00:12:54.633 --rc geninfo_all_blocks=1 00:12:54.633 --rc geninfo_unexecuted_blocks=1 00:12:54.633 00:12:54.633 ' 00:12:54.633 14:19:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:54.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.633 --rc genhtml_branch_coverage=1 00:12:54.633 --rc genhtml_function_coverage=1 00:12:54.633 --rc genhtml_legend=1 00:12:54.633 --rc geninfo_all_blocks=1 00:12:54.633 --rc geninfo_unexecuted_blocks=1 00:12:54.633 00:12:54.633 ' 00:12:54.633 14:19:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:54.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.633 --rc genhtml_branch_coverage=1 00:12:54.633 --rc genhtml_function_coverage=1 00:12:54.633 --rc genhtml_legend=1 00:12:54.633 --rc geninfo_all_blocks=1 00:12:54.633 --rc geninfo_unexecuted_blocks=1 00:12:54.633 00:12:54.633 ' 00:12:54.633 14:19:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:54.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.633 --rc genhtml_branch_coverage=1 00:12:54.633 --rc genhtml_function_coverage=1 00:12:54.633 --rc genhtml_legend=1 00:12:54.633 --rc geninfo_all_blocks=1 00:12:54.633 --rc geninfo_unexecuted_blocks=1 00:12:54.633 00:12:54.633 ' 00:12:54.633 14:19:00 -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:54.633 14:19:00 -- nvmf/common.sh@7 -- # uname -s 00:12:54.633 14:19:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.633 14:19:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.633 14:19:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.633 14:19:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.633 14:19:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.633 14:19:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.633 14:19:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.633 14:19:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.633 14:19:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.633 14:19:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:54.633 14:19:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:12:54.633 
14:19:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:12:54.633 14:19:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.633 14:19:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:54.633 14:19:00 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:54.633 14:19:00 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:54.893 14:19:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.893 14:19:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.893 14:19:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.893 14:19:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.893 14:19:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.893 14:19:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.893 14:19:00 -- paths/export.sh@5 -- # export PATH 00:12:54.893 14:19:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.893 14:19:00 -- nvmf/common.sh@46 -- # : 0 00:12:54.893 14:19:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:54.893 14:19:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:54.893 14:19:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:54.893 14:19:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.893 14:19:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.893 14:19:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
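The lt/cmp_versions exchange a few lines back (scripts/common.sh@352-367, deciding that lcov 1.15 is older than 2 and therefore which coverage flags to pass) is a plain dotted-version comparator: split both versions on the separators, compare field by field, and treat missing fields as zero. A rough bash equivalent of that pattern (a condensed sketch, not the exact scripts/common.sh source):

    lt() { cmp_versions "$1" "<" "$2"; }
    cmp_versions() {
        local -a ver1 ver2
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$3"
        local op=$2 v a b
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}
            ((a > b)) && { [[ $op == ">" || $op == ">=" ]]; return; }
            ((a < b)) && { [[ $op == "<" || $op == "<=" ]]; return; }
        done
        [[ $op == "==" || $op == "<=" || $op == ">=" ]]
    }
    # as in the trace: "lt 1.15 2" succeeds, so the pre-2.0 lcov flag set is selected
    lt 1.15 2 && echo "lcov < 2"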
00:12:54.893 14:19:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:54.893 14:19:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:54.893 14:19:00 -- target/rpc.sh@11 -- # loops=5 00:12:54.893 14:19:00 -- target/rpc.sh@23 -- # nvmftestinit 00:12:54.893 14:19:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:54.893 14:19:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:54.893 14:19:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:54.893 14:19:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:54.893 14:19:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:54.893 14:19:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.893 14:19:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:54.893 14:19:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.893 14:19:00 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:54.893 14:19:00 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:54.893 14:19:00 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:54.893 14:19:00 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:54.893 14:19:00 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:54.893 14:19:00 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:54.893 14:19:00 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:54.893 14:19:00 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:54.893 14:19:00 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:54.893 14:19:00 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:54.893 14:19:00 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:54.893 14:19:00 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:54.893 14:19:00 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:54.893 14:19:00 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:54.893 14:19:00 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:54.893 14:19:00 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:54.893 14:19:00 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:54.893 14:19:00 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:54.893 14:19:00 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:54.893 14:19:00 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:54.893 Cannot find device "nvmf_tgt_br" 00:12:54.893 14:19:00 -- nvmf/common.sh@154 -- # true 00:12:54.893 14:19:00 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:54.893 Cannot find device "nvmf_tgt_br2" 00:12:54.893 14:19:00 -- nvmf/common.sh@155 -- # true 00:12:54.893 14:19:00 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:54.893 14:19:00 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:54.893 Cannot find device "nvmf_tgt_br" 00:12:54.893 14:19:00 -- nvmf/common.sh@157 -- # true 00:12:54.893 14:19:00 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:54.893 Cannot find device "nvmf_tgt_br2" 00:12:54.893 14:19:00 -- nvmf/common.sh@158 -- # true 00:12:54.893 14:19:00 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:54.893 14:19:00 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:54.893 14:19:00 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:54.893 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:54.893 14:19:00 -- nvmf/common.sh@161 -- # true 
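The "Cannot find device" and "Cannot open network namespace" messages above are expected at this point: the previous test's nvmftestfini already tore the topology down, and the setup for the next test probes and removes any stale devices before rebuilding them. Each failing probe in the trace is immediately followed by "true", i.e. the failure is deliberately swallowed so the script keeps going. A generic rendering of that guarded-teardown pattern (illustrative only, not the literal common.sh code):

    # allow every cleanup command to fail; stale state must never abort the test
    ip link set nvmf_tgt_br nomaster 2>/dev/null || true
    ip link set nvmf_tgt_br2 nomaster 2>/dev/null || true
    ip link delete nvmf_br type bridge 2>/dev/null || true
    ip link delete nvmf_init_if 2>/dev/null || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null || true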
00:12:54.893 14:19:00 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:54.893 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:54.893 14:19:00 -- nvmf/common.sh@162 -- # true 00:12:54.893 14:19:00 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:54.893 14:19:00 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:54.893 14:19:00 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:54.893 14:19:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:54.893 14:19:00 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:54.893 14:19:00 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:54.893 14:19:00 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:54.893 14:19:00 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:54.893 14:19:00 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:54.893 14:19:00 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:54.893 14:19:00 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:54.893 14:19:00 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:54.893 14:19:00 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:54.893 14:19:00 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:54.893 14:19:00 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:54.893 14:19:00 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:54.893 14:19:00 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:54.893 14:19:00 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:55.152 14:19:00 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:55.152 14:19:00 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:55.152 14:19:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:55.152 14:19:00 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:55.152 14:19:00 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:55.152 14:19:00 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:55.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:55.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:12:55.152 00:12:55.152 --- 10.0.0.2 ping statistics --- 00:12:55.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.152 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:12:55.152 14:19:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:55.152 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:55.152 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:12:55.152 00:12:55.152 --- 10.0.0.3 ping statistics --- 00:12:55.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.152 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:12:55.152 14:19:00 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:55.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:55.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:12:55.152 00:12:55.152 --- 10.0.0.1 ping statistics --- 00:12:55.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.152 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:12:55.152 14:19:00 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:55.153 14:19:00 -- nvmf/common.sh@421 -- # return 0 00:12:55.153 14:19:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:55.153 14:19:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:55.153 14:19:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:55.153 14:19:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:55.153 14:19:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:55.153 14:19:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:55.153 14:19:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:55.153 14:19:00 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:55.153 14:19:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:55.153 14:19:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:55.153 14:19:00 -- common/autotest_common.sh@10 -- # set +x 00:12:55.153 14:19:00 -- nvmf/common.sh@469 -- # nvmfpid=78016 00:12:55.153 14:19:00 -- nvmf/common.sh@470 -- # waitforlisten 78016 00:12:55.153 14:19:00 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:55.153 14:19:00 -- common/autotest_common.sh@829 -- # '[' -z 78016 ']' 00:12:55.153 14:19:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.153 14:19:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:55.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.153 14:19:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.153 14:19:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:55.153 14:19:00 -- common/autotest_common.sh@10 -- # set +x 00:12:55.153 [2024-12-05 14:19:00.673040] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:55.153 [2024-12-05 14:19:00.673101] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:55.412 [2024-12-05 14:19:00.801477] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:55.412 [2024-12-05 14:19:00.859983] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:55.412 [2024-12-05 14:19:00.860163] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:55.412 [2024-12-05 14:19:00.860180] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:55.412 [2024-12-05 14:19:00.860191] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
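nvmfappstart above launches the target inside the namespace: NVMF_APP is first built as the bare nvmf_tgt binary plus the shared-memory instance id and tracepoint mask, then prefixed with the netns wrapper (common.sh@208), started with the core mask from the test script, and the harness waits for the RPC socket before issuing any RPCs. A condensed sketch of that launch sequence using the values visible in the trace (the wait loop is a simplified stand-in for waitforlisten):

    # instance id 0, tracepoint mask 0xFFFF, core mask 0xF (four reactors, as the notices confirm)
    NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
    NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF)
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
    "${NVMF_APP[@]}" -m 0xF &
    nvmfpid=$!
    # simplified stand-in for waitforlisten: block until the RPC socket exists
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done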
00:12:55.412 [2024-12-05 14:19:00.860308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:55.412 [2024-12-05 14:19:00.860470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:55.412 [2024-12-05 14:19:00.861295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:55.413 [2024-12-05 14:19:00.861340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.351 14:19:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:56.351 14:19:01 -- common/autotest_common.sh@862 -- # return 0 00:12:56.351 14:19:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:56.351 14:19:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:56.351 14:19:01 -- common/autotest_common.sh@10 -- # set +x 00:12:56.351 14:19:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:56.351 14:19:01 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:56.351 14:19:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.351 14:19:01 -- common/autotest_common.sh@10 -- # set +x 00:12:56.351 14:19:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.351 14:19:01 -- target/rpc.sh@26 -- # stats='{ 00:12:56.351 "poll_groups": [ 00:12:56.351 { 00:12:56.351 "admin_qpairs": 0, 00:12:56.351 "completed_nvme_io": 0, 00:12:56.351 "current_admin_qpairs": 0, 00:12:56.351 "current_io_qpairs": 0, 00:12:56.351 "io_qpairs": 0, 00:12:56.351 "name": "nvmf_tgt_poll_group_0", 00:12:56.351 "pending_bdev_io": 0, 00:12:56.351 "transports": [] 00:12:56.351 }, 00:12:56.351 { 00:12:56.351 "admin_qpairs": 0, 00:12:56.351 "completed_nvme_io": 0, 00:12:56.351 "current_admin_qpairs": 0, 00:12:56.351 "current_io_qpairs": 0, 00:12:56.351 "io_qpairs": 0, 00:12:56.351 "name": "nvmf_tgt_poll_group_1", 00:12:56.351 "pending_bdev_io": 0, 00:12:56.351 "transports": [] 00:12:56.351 }, 00:12:56.351 { 00:12:56.351 "admin_qpairs": 0, 00:12:56.351 "completed_nvme_io": 0, 00:12:56.351 "current_admin_qpairs": 0, 00:12:56.351 "current_io_qpairs": 0, 00:12:56.351 "io_qpairs": 0, 00:12:56.351 "name": "nvmf_tgt_poll_group_2", 00:12:56.351 "pending_bdev_io": 0, 00:12:56.351 "transports": [] 00:12:56.351 }, 00:12:56.351 { 00:12:56.351 "admin_qpairs": 0, 00:12:56.351 "completed_nvme_io": 0, 00:12:56.351 "current_admin_qpairs": 0, 00:12:56.351 "current_io_qpairs": 0, 00:12:56.351 "io_qpairs": 0, 00:12:56.351 "name": "nvmf_tgt_poll_group_3", 00:12:56.351 "pending_bdev_io": 0, 00:12:56.351 "transports": [] 00:12:56.351 } 00:12:56.351 ], 00:12:56.351 "tick_rate": 2200000000 00:12:56.351 }' 00:12:56.351 14:19:01 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:56.351 14:19:01 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:56.351 14:19:01 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:56.351 14:19:01 -- target/rpc.sh@15 -- # wc -l 00:12:56.351 14:19:01 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:56.351 14:19:01 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:56.351 14:19:01 -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:56.351 14:19:01 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:56.351 14:19:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.351 14:19:01 -- common/autotest_common.sh@10 -- # set +x 00:12:56.351 [2024-12-05 14:19:01.817304] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:56.351 14:19:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.351 14:19:01 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:56.351 14:19:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.351 14:19:01 -- common/autotest_common.sh@10 -- # set +x 00:12:56.351 14:19:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.351 14:19:01 -- target/rpc.sh@33 -- # stats='{ 00:12:56.351 "poll_groups": [ 00:12:56.351 { 00:12:56.351 "admin_qpairs": 0, 00:12:56.351 "completed_nvme_io": 0, 00:12:56.351 "current_admin_qpairs": 0, 00:12:56.351 "current_io_qpairs": 0, 00:12:56.351 "io_qpairs": 0, 00:12:56.351 "name": "nvmf_tgt_poll_group_0", 00:12:56.351 "pending_bdev_io": 0, 00:12:56.351 "transports": [ 00:12:56.351 { 00:12:56.351 "trtype": "TCP" 00:12:56.351 } 00:12:56.351 ] 00:12:56.351 }, 00:12:56.351 { 00:12:56.351 "admin_qpairs": 0, 00:12:56.351 "completed_nvme_io": 0, 00:12:56.351 "current_admin_qpairs": 0, 00:12:56.351 "current_io_qpairs": 0, 00:12:56.351 "io_qpairs": 0, 00:12:56.351 "name": "nvmf_tgt_poll_group_1", 00:12:56.351 "pending_bdev_io": 0, 00:12:56.351 "transports": [ 00:12:56.351 { 00:12:56.351 "trtype": "TCP" 00:12:56.351 } 00:12:56.351 ] 00:12:56.351 }, 00:12:56.351 { 00:12:56.351 "admin_qpairs": 0, 00:12:56.351 "completed_nvme_io": 0, 00:12:56.351 "current_admin_qpairs": 0, 00:12:56.351 "current_io_qpairs": 0, 00:12:56.351 "io_qpairs": 0, 00:12:56.351 "name": "nvmf_tgt_poll_group_2", 00:12:56.351 "pending_bdev_io": 0, 00:12:56.351 "transports": [ 00:12:56.351 { 00:12:56.351 "trtype": "TCP" 00:12:56.351 } 00:12:56.351 ] 00:12:56.351 }, 00:12:56.351 { 00:12:56.351 "admin_qpairs": 0, 00:12:56.351 "completed_nvme_io": 0, 00:12:56.351 "current_admin_qpairs": 0, 00:12:56.351 "current_io_qpairs": 0, 00:12:56.351 "io_qpairs": 0, 00:12:56.351 "name": "nvmf_tgt_poll_group_3", 00:12:56.351 "pending_bdev_io": 0, 00:12:56.351 "transports": [ 00:12:56.351 { 00:12:56.351 "trtype": "TCP" 00:12:56.351 } 00:12:56.351 ] 00:12:56.351 } 00:12:56.351 ], 00:12:56.351 "tick_rate": 2200000000 00:12:56.351 }' 00:12:56.351 14:19:01 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:56.351 14:19:01 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:56.351 14:19:01 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:56.351 14:19:01 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:56.351 14:19:01 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:56.352 14:19:01 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:56.352 14:19:01 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:56.352 14:19:01 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:56.352 14:19:01 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:56.352 14:19:01 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:56.352 14:19:01 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:56.352 14:19:01 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:56.352 14:19:01 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:56.352 14:19:01 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:56.352 14:19:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.352 14:19:01 -- common/autotest_common.sh@10 -- # set +x 00:12:56.352 Malloc1 00:12:56.352 14:19:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.352 14:19:01 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:56.352 14:19:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.352 14:19:01 -- common/autotest_common.sh@10 -- # set +x 00:12:56.611 
14:19:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.611 14:19:01 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:56.611 14:19:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.611 14:19:01 -- common/autotest_common.sh@10 -- # set +x 00:12:56.611 14:19:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.611 14:19:02 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:56.611 14:19:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.611 14:19:02 -- common/autotest_common.sh@10 -- # set +x 00:12:56.611 14:19:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.611 14:19:02 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:56.611 14:19:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.611 14:19:02 -- common/autotest_common.sh@10 -- # set +x 00:12:56.611 [2024-12-05 14:19:02.021363] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:56.611 14:19:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.611 14:19:02 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c -a 10.0.0.2 -s 4420 00:12:56.611 14:19:02 -- common/autotest_common.sh@650 -- # local es=0 00:12:56.611 14:19:02 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c -a 10.0.0.2 -s 4420 00:12:56.611 14:19:02 -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:56.611 14:19:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:56.611 14:19:02 -- common/autotest_common.sh@642 -- # type -t nvme 00:12:56.611 14:19:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:56.611 14:19:02 -- common/autotest_common.sh@644 -- # type -P nvme 00:12:56.611 14:19:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:56.611 14:19:02 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:56.611 14:19:02 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:56.611 14:19:02 -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c -a 10.0.0.2 -s 4420 00:12:56.611 [2024-12-05 14:19:02.046067] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c' 00:12:56.611 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:56.611 could not add new controller: failed to write to nvme-fabrics device 00:12:56.611 14:19:02 -- common/autotest_common.sh@653 -- # es=1 00:12:56.611 14:19:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:56.611 14:19:02 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:56.611 14:19:02 -- common/autotest_common.sh@677 -- # 
(( !es == 0 )) 00:12:56.611 14:19:02 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:12:56.611 14:19:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.611 14:19:02 -- common/autotest_common.sh@10 -- # set +x 00:12:56.611 14:19:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.611 14:19:02 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:56.611 14:19:02 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:56.611 14:19:02 -- common/autotest_common.sh@1187 -- # local i=0 00:12:56.611 14:19:02 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:56.611 14:19:02 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:56.611 14:19:02 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:59.148 14:19:04 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:59.148 14:19:04 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:59.148 14:19:04 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:59.148 14:19:04 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:59.148 14:19:04 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:59.148 14:19:04 -- common/autotest_common.sh@1197 -- # return 0 00:12:59.148 14:19:04 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:59.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.148 14:19:04 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:59.148 14:19:04 -- common/autotest_common.sh@1208 -- # local i=0 00:12:59.148 14:19:04 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:59.148 14:19:04 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.148 14:19:04 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.148 14:19:04 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:59.148 14:19:04 -- common/autotest_common.sh@1220 -- # return 0 00:12:59.148 14:19:04 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:12:59.148 14:19:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.148 14:19:04 -- common/autotest_common.sh@10 -- # set +x 00:12:59.148 14:19:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.148 14:19:04 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:59.148 14:19:04 -- common/autotest_common.sh@650 -- # local es=0 00:12:59.148 14:19:04 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:59.148 14:19:04 -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:59.148 14:19:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:59.148 14:19:04 -- common/autotest_common.sh@642 -- # type -t nvme 00:12:59.148 14:19:04 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:59.148 14:19:04 -- common/autotest_common.sh@644 -- # type -P nvme 00:12:59.148 14:19:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:59.148 14:19:04 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:59.148 14:19:04 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:59.148 14:19:04 -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:59.148 [2024-12-05 14:19:04.467068] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c' 00:12:59.148 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:59.148 could not add new controller: failed to write to nvme-fabrics device 00:12:59.148 14:19:04 -- common/autotest_common.sh@653 -- # es=1 00:12:59.148 14:19:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:59.148 14:19:04 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:59.148 14:19:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:59.148 14:19:04 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:59.148 14:19:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.148 14:19:04 -- common/autotest_common.sh@10 -- # set +x 00:12:59.148 14:19:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.148 14:19:04 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:59.148 14:19:04 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:59.148 14:19:04 -- common/autotest_common.sh@1187 -- # local i=0 00:12:59.148 14:19:04 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:59.148 14:19:04 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:59.148 14:19:04 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:01.049 14:19:06 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:01.049 14:19:06 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:01.049 14:19:06 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:13:01.049 14:19:06 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:01.049 14:19:06 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:01.049 14:19:06 -- common/autotest_common.sh@1197 -- # return 0 00:13:01.049 14:19:06 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:01.307 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.307 14:19:06 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:01.307 14:19:06 -- common/autotest_common.sh@1208 -- # local i=0 00:13:01.307 14:19:06 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:01.307 14:19:06 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.307 14:19:06 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:01.307 14:19:06 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.307 14:19:06 -- common/autotest_common.sh@1220 -- # return 0 00:13:01.307 14:19:06 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.307 14:19:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.307 14:19:06 -- common/autotest_common.sh@10 -- # set +x 00:13:01.307 14:19:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.307 14:19:06 -- target/rpc.sh@81 -- # seq 1 5 00:13:01.307 14:19:06 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:01.307 14:19:06 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:01.307 14:19:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.307 14:19:06 -- common/autotest_common.sh@10 -- # set +x 00:13:01.307 14:19:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.307 14:19:06 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.307 14:19:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.307 14:19:06 -- common/autotest_common.sh@10 -- # set +x 00:13:01.307 [2024-12-05 14:19:06.859400] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.307 14:19:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.307 14:19:06 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:01.307 14:19:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.307 14:19:06 -- common/autotest_common.sh@10 -- # set +x 00:13:01.307 14:19:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.307 14:19:06 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:01.307 14:19:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.307 14:19:06 -- common/autotest_common.sh@10 -- # set +x 00:13:01.307 14:19:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.307 14:19:06 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:01.564 14:19:07 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:01.564 14:19:07 -- common/autotest_common.sh@1187 -- # local i=0 00:13:01.564 14:19:07 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:01.564 14:19:07 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:01.564 14:19:07 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:03.464 14:19:09 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:03.464 14:19:09 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:03.464 14:19:09 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:13:03.464 14:19:09 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:03.464 14:19:09 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:03.464 14:19:09 -- common/autotest_common.sh@1197 -- # return 0 00:13:03.464 14:19:09 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:03.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.723 14:19:09 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:03.723 14:19:09 -- common/autotest_common.sh@1208 -- # local i=0 00:13:03.723 14:19:09 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:03.723 14:19:09 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 
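Each pass of the loop above exercises the full subsystem lifecycle over the RPC interface: create the subsystem, expose it on the TCP listener, attach the Malloc1 bdev as namespace 5, open it to any host, connect from the initiator, confirm the serial shows up in lsblk, then disconnect and tear everything down again. One iteration condensed into the underlying commands (rpc_cmd in the trace is roughly a wrapper around SPDK's scripts/rpc.py; paths, NQNs and the hostnqn/hostid variables are taken from the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5
    $rpc nvmf_subsystem_allow_any_host "$nqn"
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n "$nqn" -a 10.0.0.2 -s 4420
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME    # expect 1 attached namespace
    nvme disconnect -n "$nqn"
    $rpc nvmf_subsystem_remove_ns "$nqn" 5
    $rpc nvmf_delete_subsystem "$nqn"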
00:13:03.724 14:19:09 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:03.724 14:19:09 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.724 14:19:09 -- common/autotest_common.sh@1220 -- # return 0 00:13:03.724 14:19:09 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:03.724 14:19:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.724 14:19:09 -- common/autotest_common.sh@10 -- # set +x 00:13:03.724 14:19:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.724 14:19:09 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:03.724 14:19:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.724 14:19:09 -- common/autotest_common.sh@10 -- # set +x 00:13:03.724 14:19:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.724 14:19:09 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:03.724 14:19:09 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:03.724 14:19:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.724 14:19:09 -- common/autotest_common.sh@10 -- # set +x 00:13:03.724 14:19:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.724 14:19:09 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.724 14:19:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.724 14:19:09 -- common/autotest_common.sh@10 -- # set +x 00:13:03.724 [2024-12-05 14:19:09.283943] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.724 14:19:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.724 14:19:09 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:03.724 14:19:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.724 14:19:09 -- common/autotest_common.sh@10 -- # set +x 00:13:03.724 14:19:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.724 14:19:09 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:03.724 14:19:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.724 14:19:09 -- common/autotest_common.sh@10 -- # set +x 00:13:03.724 14:19:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.724 14:19:09 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:03.982 14:19:09 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:03.982 14:19:09 -- common/autotest_common.sh@1187 -- # local i=0 00:13:03.982 14:19:09 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:03.982 14:19:09 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:03.982 14:19:09 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:05.886 14:19:11 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:05.886 14:19:11 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:05.886 14:19:11 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:13:05.886 14:19:11 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:05.886 14:19:11 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:05.886 14:19:11 -- 
common/autotest_common.sh@1197 -- # return 0 00:13:05.886 14:19:11 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:06.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.146 14:19:11 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:06.146 14:19:11 -- common/autotest_common.sh@1208 -- # local i=0 00:13:06.146 14:19:11 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.146 14:19:11 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:06.146 14:19:11 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.146 14:19:11 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:06.146 14:19:11 -- common/autotest_common.sh@1220 -- # return 0 00:13:06.146 14:19:11 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:06.146 14:19:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.146 14:19:11 -- common/autotest_common.sh@10 -- # set +x 00:13:06.146 14:19:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.146 14:19:11 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:06.146 14:19:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.146 14:19:11 -- common/autotest_common.sh@10 -- # set +x 00:13:06.146 14:19:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.146 14:19:11 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:06.146 14:19:11 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:06.146 14:19:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.146 14:19:11 -- common/autotest_common.sh@10 -- # set +x 00:13:06.146 14:19:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.146 14:19:11 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:06.146 14:19:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.146 14:19:11 -- common/autotest_common.sh@10 -- # set +x 00:13:06.146 [2024-12-05 14:19:11.592251] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:06.146 14:19:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.146 14:19:11 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:06.146 14:19:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.146 14:19:11 -- common/autotest_common.sh@10 -- # set +x 00:13:06.146 14:19:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.146 14:19:11 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:06.146 14:19:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.146 14:19:11 -- common/autotest_common.sh@10 -- # set +x 00:13:06.146 14:19:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.146 14:19:11 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:06.146 14:19:11 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:06.146 14:19:11 -- common/autotest_common.sh@1187 -- # local i=0 00:13:06.146 14:19:11 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:06.146 14:19:11 -- common/autotest_common.sh@1189 -- 
# [[ -n '' ]] 00:13:06.146 14:19:11 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:08.680 14:19:13 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:08.680 14:19:13 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:08.680 14:19:13 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:13:08.680 14:19:13 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:08.680 14:19:13 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:08.680 14:19:13 -- common/autotest_common.sh@1197 -- # return 0 00:13:08.680 14:19:13 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:08.680 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.680 14:19:13 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:08.680 14:19:13 -- common/autotest_common.sh@1208 -- # local i=0 00:13:08.680 14:19:13 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.680 14:19:13 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:08.680 14:19:13 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:08.680 14:19:13 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.680 14:19:13 -- common/autotest_common.sh@1220 -- # return 0 00:13:08.680 14:19:13 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:08.680 14:19:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.680 14:19:13 -- common/autotest_common.sh@10 -- # set +x 00:13:08.680 14:19:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.680 14:19:13 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.680 14:19:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.680 14:19:13 -- common/autotest_common.sh@10 -- # set +x 00:13:08.680 14:19:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.680 14:19:13 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:08.680 14:19:13 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:08.680 14:19:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.680 14:19:13 -- common/autotest_common.sh@10 -- # set +x 00:13:08.680 14:19:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.680 14:19:13 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.680 14:19:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.680 14:19:13 -- common/autotest_common.sh@10 -- # set +x 00:13:08.680 [2024-12-05 14:19:13.904347] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.680 14:19:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.680 14:19:13 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:08.680 14:19:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.680 14:19:13 -- common/autotest_common.sh@10 -- # set +x 00:13:08.680 14:19:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.680 14:19:13 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:08.680 14:19:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.680 14:19:13 -- common/autotest_common.sh@10 -- # set +x 00:13:08.680 14:19:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.680 
14:19:13 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:08.680 14:19:14 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:08.680 14:19:14 -- common/autotest_common.sh@1187 -- # local i=0 00:13:08.680 14:19:14 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:08.680 14:19:14 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:08.680 14:19:14 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:10.579 14:19:16 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:10.579 14:19:16 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:10.579 14:19:16 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:13:10.579 14:19:16 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:10.579 14:19:16 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:10.579 14:19:16 -- common/autotest_common.sh@1197 -- # return 0 00:13:10.579 14:19:16 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:10.579 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.579 14:19:16 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:10.579 14:19:16 -- common/autotest_common.sh@1208 -- # local i=0 00:13:10.579 14:19:16 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:10.579 14:19:16 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.579 14:19:16 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:10.579 14:19:16 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.579 14:19:16 -- common/autotest_common.sh@1220 -- # return 0 00:13:10.579 14:19:16 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:10.579 14:19:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.579 14:19:16 -- common/autotest_common.sh@10 -- # set +x 00:13:10.579 14:19:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.579 14:19:16 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:10.579 14:19:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.579 14:19:16 -- common/autotest_common.sh@10 -- # set +x 00:13:10.579 14:19:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.579 14:19:16 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:10.579 14:19:16 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:10.579 14:19:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.579 14:19:16 -- common/autotest_common.sh@10 -- # set +x 00:13:10.579 14:19:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.579 14:19:16 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:10.580 14:19:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.580 14:19:16 -- common/autotest_common.sh@10 -- # set +x 00:13:10.580 [2024-12-05 14:19:16.216851] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.580 14:19:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.580 14:19:16 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:10.580 
14:19:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.580 14:19:16 -- common/autotest_common.sh@10 -- # set +x 00:13:10.838 14:19:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.838 14:19:16 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:10.838 14:19:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.838 14:19:16 -- common/autotest_common.sh@10 -- # set +x 00:13:10.838 14:19:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.838 14:19:16 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:10.838 14:19:16 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:10.838 14:19:16 -- common/autotest_common.sh@1187 -- # local i=0 00:13:10.838 14:19:16 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:10.838 14:19:16 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:10.838 14:19:16 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:13.371 14:19:18 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:13.371 14:19:18 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:13.371 14:19:18 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:13:13.371 14:19:18 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:13.371 14:19:18 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:13.371 14:19:18 -- common/autotest_common.sh@1197 -- # return 0 00:13:13.371 14:19:18 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:13.371 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.371 14:19:18 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:13.371 14:19:18 -- common/autotest_common.sh@1208 -- # local i=0 00:13:13.371 14:19:18 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:13.371 14:19:18 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:13.371 14:19:18 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:13.371 14:19:18 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:13.371 14:19:18 -- common/autotest_common.sh@1220 -- # return 0 00:13:13.371 14:19:18 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:13.371 14:19:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.371 14:19:18 -- common/autotest_common.sh@10 -- # set +x 00:13:13.371 14:19:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.371 14:19:18 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:13.371 14:19:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.371 14:19:18 -- common/autotest_common.sh@10 -- # set +x 00:13:13.371 14:19:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.371 14:19:18 -- target/rpc.sh@99 -- # seq 1 5 00:13:13.371 14:19:18 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:13.371 14:19:18 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:13.371 14:19:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.371 14:19:18 -- common/autotest_common.sh@10 -- # set +x 00:13:13.371 14:19:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.371 14:19:18 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:13.371 14:19:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.371 14:19:18 -- common/autotest_common.sh@10 -- # set +x 00:13:13.371 [2024-12-05 14:19:18.641221] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.371 14:19:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.371 14:19:18 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:13.371 14:19:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.371 14:19:18 -- common/autotest_common.sh@10 -- # set +x 00:13:13.371 14:19:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.371 14:19:18 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:13.371 14:19:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.371 14:19:18 -- common/autotest_common.sh@10 -- # set +x 00:13:13.371 14:19:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.371 14:19:18 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.371 14:19:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.371 14:19:18 -- common/autotest_common.sh@10 -- # set +x 00:13:13.371 14:19:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.371 14:19:18 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:13.371 14:19:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.371 14:19:18 -- common/autotest_common.sh@10 -- # set +x 00:13:13.371 14:19:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.371 14:19:18 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:13.371 14:19:18 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:13.371 14:19:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.371 14:19:18 -- common/autotest_common.sh@10 -- # set +x 00:13:13.371 14:19:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.371 14:19:18 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:13.371 14:19:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.371 14:19:18 -- common/autotest_common.sh@10 -- # set +x 00:13:13.371 [2024-12-05 14:19:18.689260] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.371 14:19:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.371 14:19:18 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:13.371 14:19:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.371 14:19:18 -- common/autotest_common.sh@10 -- # set +x 00:13:13.371 14:19:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.371 14:19:18 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:13.371 14:19:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.371 14:19:18 -- common/autotest_common.sh@10 -- # set +x 00:13:13.371 14:19:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.371 14:19:18 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.371 14:19:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.371 14:19:18 -- 
common/autotest_common.sh@10 -- # set +x 00:13:13.371 14:19:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.371 14:19:18 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:13.371 14:19:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.371 14:19:18 -- common/autotest_common.sh@10 -- # set +x 00:13:13.371 14:19:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.371 14:19:18 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:13.371 14:19:18 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:13.371 14:19:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.372 14:19:18 -- common/autotest_common.sh@10 -- # set +x 00:13:13.372 14:19:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.372 14:19:18 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:13.372 14:19:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.372 14:19:18 -- common/autotest_common.sh@10 -- # set +x 00:13:13.372 [2024-12-05 14:19:18.741350] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.372 14:19:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.372 14:19:18 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:13.372 14:19:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.372 14:19:18 -- common/autotest_common.sh@10 -- # set +x 00:13:13.372 14:19:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.372 14:19:18 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:13.372 14:19:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.372 14:19:18 -- common/autotest_common.sh@10 -- # set +x 00:13:13.372 14:19:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.372 14:19:18 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.372 14:19:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.372 14:19:18 -- common/autotest_common.sh@10 -- # set +x 00:13:13.372 14:19:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.372 14:19:18 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:13.372 14:19:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.372 14:19:18 -- common/autotest_common.sh@10 -- # set +x 00:13:13.372 14:19:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.372 14:19:18 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:13.372 14:19:18 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:13.372 14:19:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.372 14:19:18 -- common/autotest_common.sh@10 -- # set +x 00:13:13.372 14:19:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.372 14:19:18 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:13.372 14:19:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.372 14:19:18 -- common/autotest_common.sh@10 -- # set +x 00:13:13.372 [2024-12-05 14:19:18.789439] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.372 14:19:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.372 
14:19:18 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:13.372 14:19:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.372 14:19:18 -- common/autotest_common.sh@10 -- # set +x 00:13:13.372 14:19:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.372 14:19:18 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:13.372 14:19:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.372 14:19:18 -- common/autotest_common.sh@10 -- # set +x 00:13:13.372 14:19:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.372 14:19:18 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.372 14:19:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.372 14:19:18 -- common/autotest_common.sh@10 -- # set +x 00:13:13.372 14:19:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.372 14:19:18 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:13.372 14:19:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.372 14:19:18 -- common/autotest_common.sh@10 -- # set +x 00:13:13.372 14:19:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.372 14:19:18 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:13.372 14:19:18 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:13.372 14:19:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.372 14:19:18 -- common/autotest_common.sh@10 -- # set +x 00:13:13.372 14:19:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.372 14:19:18 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:13.372 14:19:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.372 14:19:18 -- common/autotest_common.sh@10 -- # set +x 00:13:13.372 [2024-12-05 14:19:18.837507] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.372 14:19:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.372 14:19:18 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:13.372 14:19:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.372 14:19:18 -- common/autotest_common.sh@10 -- # set +x 00:13:13.372 14:19:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.372 14:19:18 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:13.372 14:19:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.372 14:19:18 -- common/autotest_common.sh@10 -- # set +x 00:13:13.372 14:19:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.372 14:19:18 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.372 14:19:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.372 14:19:18 -- common/autotest_common.sh@10 -- # set +x 00:13:13.372 14:19:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.372 14:19:18 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:13.372 14:19:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.372 14:19:18 -- common/autotest_common.sh@10 -- # set +x 00:13:13.372 14:19:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.372 14:19:18 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
00:13:13.372 14:19:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.372 14:19:18 -- common/autotest_common.sh@10 -- # set +x 00:13:13.372 14:19:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.372 14:19:18 -- target/rpc.sh@110 -- # stats='{ 00:13:13.372 "poll_groups": [ 00:13:13.372 { 00:13:13.372 "admin_qpairs": 2, 00:13:13.372 "completed_nvme_io": 68, 00:13:13.372 "current_admin_qpairs": 0, 00:13:13.372 "current_io_qpairs": 0, 00:13:13.372 "io_qpairs": 16, 00:13:13.372 "name": "nvmf_tgt_poll_group_0", 00:13:13.372 "pending_bdev_io": 0, 00:13:13.372 "transports": [ 00:13:13.372 { 00:13:13.372 "trtype": "TCP" 00:13:13.372 } 00:13:13.372 ] 00:13:13.372 }, 00:13:13.372 { 00:13:13.372 "admin_qpairs": 3, 00:13:13.372 "completed_nvme_io": 66, 00:13:13.372 "current_admin_qpairs": 0, 00:13:13.372 "current_io_qpairs": 0, 00:13:13.372 "io_qpairs": 17, 00:13:13.372 "name": "nvmf_tgt_poll_group_1", 00:13:13.372 "pending_bdev_io": 0, 00:13:13.372 "transports": [ 00:13:13.372 { 00:13:13.372 "trtype": "TCP" 00:13:13.372 } 00:13:13.372 ] 00:13:13.372 }, 00:13:13.372 { 00:13:13.372 "admin_qpairs": 1, 00:13:13.372 "completed_nvme_io": 118, 00:13:13.372 "current_admin_qpairs": 0, 00:13:13.372 "current_io_qpairs": 0, 00:13:13.372 "io_qpairs": 19, 00:13:13.372 "name": "nvmf_tgt_poll_group_2", 00:13:13.372 "pending_bdev_io": 0, 00:13:13.372 "transports": [ 00:13:13.372 { 00:13:13.372 "trtype": "TCP" 00:13:13.372 } 00:13:13.372 ] 00:13:13.372 }, 00:13:13.372 { 00:13:13.372 "admin_qpairs": 1, 00:13:13.372 "completed_nvme_io": 168, 00:13:13.372 "current_admin_qpairs": 0, 00:13:13.372 "current_io_qpairs": 0, 00:13:13.372 "io_qpairs": 18, 00:13:13.372 "name": "nvmf_tgt_poll_group_3", 00:13:13.372 "pending_bdev_io": 0, 00:13:13.372 "transports": [ 00:13:13.372 { 00:13:13.372 "trtype": "TCP" 00:13:13.372 } 00:13:13.372 ] 00:13:13.372 } 00:13:13.372 ], 00:13:13.372 "tick_rate": 2200000000 00:13:13.372 }' 00:13:13.372 14:19:18 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:13.372 14:19:18 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:13.372 14:19:18 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:13.372 14:19:18 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:13.372 14:19:18 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:13.372 14:19:18 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:13.372 14:19:18 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:13.372 14:19:18 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:13.372 14:19:18 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:13.372 14:19:18 -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:13:13.372 14:19:18 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:13.372 14:19:18 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:13.372 14:19:19 -- target/rpc.sh@123 -- # nvmftestfini 00:13:13.372 14:19:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:13.372 14:19:19 -- nvmf/common.sh@116 -- # sync 00:13:13.631 14:19:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:13.632 14:19:19 -- nvmf/common.sh@119 -- # set +e 00:13:13.632 14:19:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:13.632 14:19:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:13.632 rmmod nvme_tcp 00:13:13.632 rmmod nvme_fabrics 00:13:13.632 rmmod nvme_keyring 00:13:13.632 14:19:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:13.632 14:19:19 -- nvmf/common.sh@123 -- # set -e 00:13:13.632 14:19:19 -- nvmf/common.sh@124 
-- # return 0 00:13:13.632 14:19:19 -- nvmf/common.sh@477 -- # '[' -n 78016 ']' 00:13:13.632 14:19:19 -- nvmf/common.sh@478 -- # killprocess 78016 00:13:13.632 14:19:19 -- common/autotest_common.sh@936 -- # '[' -z 78016 ']' 00:13:13.632 14:19:19 -- common/autotest_common.sh@940 -- # kill -0 78016 00:13:13.632 14:19:19 -- common/autotest_common.sh@941 -- # uname 00:13:13.632 14:19:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:13.632 14:19:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78016 00:13:13.632 14:19:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:13.632 14:19:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:13.632 killing process with pid 78016 00:13:13.632 14:19:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78016' 00:13:13.632 14:19:19 -- common/autotest_common.sh@955 -- # kill 78016 00:13:13.632 14:19:19 -- common/autotest_common.sh@960 -- # wait 78016 00:13:13.908 14:19:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:13.908 14:19:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:13.908 14:19:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:13.908 14:19:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:13.908 14:19:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:13.908 14:19:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.908 14:19:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:13.908 14:19:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.908 14:19:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:13.908 00:13:13.908 real 0m19.309s 00:13:13.908 user 1m13.362s 00:13:13.908 sys 0m2.056s 00:13:13.908 14:19:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:13.908 14:19:19 -- common/autotest_common.sh@10 -- # set +x 00:13:13.908 ************************************ 00:13:13.908 END TEST nvmf_rpc 00:13:13.908 ************************************ 00:13:13.908 14:19:19 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:13.908 14:19:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:13.908 14:19:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:13.908 14:19:19 -- common/autotest_common.sh@10 -- # set +x 00:13:13.908 ************************************ 00:13:13.908 START TEST nvmf_invalid 00:13:13.908 ************************************ 00:13:13.908 14:19:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:13.908 * Looking for test storage... 
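The queue-pair check that closed the nvmf_rpc run above ((( 7 > 0 )) and (( 70 > 0 ))) comes from the jsum helper summing nvmf_get_stats fields across the four poll groups: 2+3+1+1 admin qpairs and 16+17+19+18 I/O qpairs. A minimal sketch of that aggregation, assuming the stats JSON is fed to jq through a here-string; only the jq filters and the awk summation are taken verbatim from the trace:

    # hedged reconstruction of the jsum pattern: sum one numeric field over all poll groups
    stats=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_stats)

    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 2+3+1+1     = 7  in this run
    (( $(jsum '.poll_groups[].io_qpairs')    > 0 ))   # 16+17+19+18 = 70 in this run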
00:13:13.908 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:13.908 14:19:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:14.208 14:19:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:14.208 14:19:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:14.208 14:19:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:14.208 14:19:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:14.208 14:19:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:14.208 14:19:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:14.208 14:19:19 -- scripts/common.sh@335 -- # IFS=.-: 00:13:14.208 14:19:19 -- scripts/common.sh@335 -- # read -ra ver1 00:13:14.208 14:19:19 -- scripts/common.sh@336 -- # IFS=.-: 00:13:14.208 14:19:19 -- scripts/common.sh@336 -- # read -ra ver2 00:13:14.208 14:19:19 -- scripts/common.sh@337 -- # local 'op=<' 00:13:14.208 14:19:19 -- scripts/common.sh@339 -- # ver1_l=2 00:13:14.208 14:19:19 -- scripts/common.sh@340 -- # ver2_l=1 00:13:14.208 14:19:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:14.208 14:19:19 -- scripts/common.sh@343 -- # case "$op" in 00:13:14.208 14:19:19 -- scripts/common.sh@344 -- # : 1 00:13:14.208 14:19:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:14.208 14:19:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:14.208 14:19:19 -- scripts/common.sh@364 -- # decimal 1 00:13:14.208 14:19:19 -- scripts/common.sh@352 -- # local d=1 00:13:14.208 14:19:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:14.208 14:19:19 -- scripts/common.sh@354 -- # echo 1 00:13:14.208 14:19:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:14.208 14:19:19 -- scripts/common.sh@365 -- # decimal 2 00:13:14.208 14:19:19 -- scripts/common.sh@352 -- # local d=2 00:13:14.208 14:19:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:14.208 14:19:19 -- scripts/common.sh@354 -- # echo 2 00:13:14.208 14:19:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:14.208 14:19:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:14.208 14:19:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:14.208 14:19:19 -- scripts/common.sh@367 -- # return 0 00:13:14.208 14:19:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:14.208 14:19:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:14.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.208 --rc genhtml_branch_coverage=1 00:13:14.208 --rc genhtml_function_coverage=1 00:13:14.208 --rc genhtml_legend=1 00:13:14.208 --rc geninfo_all_blocks=1 00:13:14.208 --rc geninfo_unexecuted_blocks=1 00:13:14.208 00:13:14.208 ' 00:13:14.208 14:19:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:14.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.208 --rc genhtml_branch_coverage=1 00:13:14.208 --rc genhtml_function_coverage=1 00:13:14.208 --rc genhtml_legend=1 00:13:14.208 --rc geninfo_all_blocks=1 00:13:14.208 --rc geninfo_unexecuted_blocks=1 00:13:14.208 00:13:14.208 ' 00:13:14.208 14:19:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:14.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.208 --rc genhtml_branch_coverage=1 00:13:14.208 --rc genhtml_function_coverage=1 00:13:14.208 --rc genhtml_legend=1 00:13:14.208 --rc geninfo_all_blocks=1 00:13:14.208 --rc geninfo_unexecuted_blocks=1 00:13:14.208 00:13:14.208 ' 00:13:14.208 
14:19:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:14.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.208 --rc genhtml_branch_coverage=1 00:13:14.208 --rc genhtml_function_coverage=1 00:13:14.208 --rc genhtml_legend=1 00:13:14.208 --rc geninfo_all_blocks=1 00:13:14.208 --rc geninfo_unexecuted_blocks=1 00:13:14.208 00:13:14.208 ' 00:13:14.208 14:19:19 -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:14.208 14:19:19 -- nvmf/common.sh@7 -- # uname -s 00:13:14.208 14:19:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:14.208 14:19:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:14.208 14:19:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:14.208 14:19:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:14.208 14:19:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:14.208 14:19:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:14.208 14:19:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:14.208 14:19:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:14.208 14:19:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:14.208 14:19:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:14.209 14:19:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:13:14.209 14:19:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:13:14.209 14:19:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:14.209 14:19:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:14.209 14:19:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:14.209 14:19:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:14.209 14:19:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:14.209 14:19:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:14.209 14:19:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:14.209 14:19:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.209 14:19:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.209 14:19:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.209 14:19:19 -- paths/export.sh@5 -- # export PATH 00:13:14.209 14:19:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.209 14:19:19 -- nvmf/common.sh@46 -- # : 0 00:13:14.209 14:19:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:14.209 14:19:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:14.209 14:19:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:14.209 14:19:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:14.209 14:19:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:14.209 14:19:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:14.209 14:19:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:14.209 14:19:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:14.209 14:19:19 -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:13:14.209 14:19:19 -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:14.209 14:19:19 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:14.209 14:19:19 -- target/invalid.sh@14 -- # target=foobar 00:13:14.209 14:19:19 -- target/invalid.sh@16 -- # RANDOM=0 00:13:14.209 14:19:19 -- target/invalid.sh@34 -- # nvmftestinit 00:13:14.209 14:19:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:14.209 14:19:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:14.209 14:19:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:14.209 14:19:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:14.209 14:19:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:14.209 14:19:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.209 14:19:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:14.209 14:19:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.209 14:19:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:14.209 14:19:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:14.209 14:19:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:14.209 14:19:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:14.209 14:19:19 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:14.209 14:19:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:14.209 14:19:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:14.209 14:19:19 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:14.209 14:19:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 
00:13:14.209 14:19:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:14.209 14:19:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:14.209 14:19:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:14.209 14:19:19 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:14.209 14:19:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:14.209 14:19:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:14.209 14:19:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:14.209 14:19:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:14.209 14:19:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:14.209 14:19:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:14.209 14:19:19 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:14.209 Cannot find device "nvmf_tgt_br" 00:13:14.209 14:19:19 -- nvmf/common.sh@154 -- # true 00:13:14.209 14:19:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:14.209 Cannot find device "nvmf_tgt_br2" 00:13:14.209 14:19:19 -- nvmf/common.sh@155 -- # true 00:13:14.209 14:19:19 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:14.209 14:19:19 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:14.209 Cannot find device "nvmf_tgt_br" 00:13:14.209 14:19:19 -- nvmf/common.sh@157 -- # true 00:13:14.209 14:19:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:14.209 Cannot find device "nvmf_tgt_br2" 00:13:14.209 14:19:19 -- nvmf/common.sh@158 -- # true 00:13:14.209 14:19:19 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:14.209 14:19:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:14.209 14:19:19 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:14.209 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:14.209 14:19:19 -- nvmf/common.sh@161 -- # true 00:13:14.209 14:19:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:14.209 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:14.209 14:19:19 -- nvmf/common.sh@162 -- # true 00:13:14.209 14:19:19 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:14.209 14:19:19 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:14.209 14:19:19 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:14.209 14:19:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:14.209 14:19:19 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:14.483 14:19:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:14.483 14:19:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:14.483 14:19:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:14.483 14:19:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:14.483 14:19:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:14.483 14:19:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:14.483 14:19:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:14.483 14:19:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 
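nvmf_veth_init is building the self-contained test network used by the nvmf-tcp suites: a target namespace (nvmf_tgt_ns_spdk) holding 10.0.0.2 and 10.0.0.3, an initiator-side veth at 10.0.0.1, and, in the commands that follow, a bridge joining the peer ends plus an iptables rule admitting port 4420. The topology condenses to roughly the following; this is a sketch of the ip/iptables calls visible in the trace, minus error handling:

    # hedged outline of nvmf_veth_init; interface names and addresses are those in the trace
    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target port 10.0.0.2
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target port 10.0.0.3

    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that follow simply verify that both target addresses are reachable from the host side and that 10.0.0.1 is reachable from inside the namespace before the target is started.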
00:13:14.483 14:19:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:14.483 14:19:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:14.483 14:19:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:14.483 14:19:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:14.483 14:19:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:14.483 14:19:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:14.483 14:19:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:14.483 14:19:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:14.483 14:19:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:14.483 14:19:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:14.483 14:19:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:14.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:14.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:13:14.483 00:13:14.483 --- 10.0.0.2 ping statistics --- 00:13:14.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.483 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:13:14.483 14:19:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:14.483 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:14.483 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:13:14.483 00:13:14.483 --- 10.0.0.3 ping statistics --- 00:13:14.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.483 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:13:14.483 14:19:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:14.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:14.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:13:14.483 00:13:14.483 --- 10.0.0.1 ping statistics --- 00:13:14.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.483 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:13:14.483 14:19:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:14.483 14:19:19 -- nvmf/common.sh@421 -- # return 0 00:13:14.483 14:19:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:14.483 14:19:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:14.483 14:19:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:14.483 14:19:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:14.483 14:19:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:14.483 14:19:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:14.483 14:19:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:14.483 14:19:20 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:14.483 14:19:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:14.483 14:19:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:14.483 14:19:20 -- common/autotest_common.sh@10 -- # set +x 00:13:14.483 14:19:20 -- nvmf/common.sh@469 -- # nvmfpid=78536 00:13:14.483 14:19:20 -- nvmf/common.sh@470 -- # waitforlisten 78536 00:13:14.483 14:19:20 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:14.483 14:19:20 -- common/autotest_common.sh@829 -- # '[' -z 78536 ']' 00:13:14.483 14:19:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.483 14:19:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:14.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.483 14:19:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.483 14:19:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:14.483 14:19:20 -- common/autotest_common.sh@10 -- # set +x 00:13:14.483 [2024-12-05 14:19:20.060680] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:14.483 [2024-12-05 14:19:20.060745] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:14.742 [2024-12-05 14:19:20.195351] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:14.742 [2024-12-05 14:19:20.254128] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:14.742 [2024-12-05 14:19:20.254268] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:14.742 [2024-12-05 14:19:20.254282] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:14.742 [2024-12-05 14:19:20.254291] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
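With the namespace plumbed and nvme-tcp loaded, nvmfappstart runs nvmf_tgt inside it with core mask 0xF (hence the four reactor cores reported next) and waitforlisten blocks until the application answers on /var/tmp/spdk.sock. A rough sketch of that start-and-wait pattern; the readiness probe shown here (rpc_get_methods over the socket) is an illustration, not the literal helper:

    # hedged sketch of nvmfappstart + waitforlisten as reflected in the trace
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1    # give up if the target died during startup
        sleep 0.5
    done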
00:13:14.742 [2024-12-05 14:19:20.254451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.742 [2024-12-05 14:19:20.254912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:14.742 [2024-12-05 14:19:20.255435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:14.742 [2024-12-05 14:19:20.255509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.306 14:19:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:15.306 14:19:20 -- common/autotest_common.sh@862 -- # return 0 00:13:15.306 14:19:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:15.306 14:19:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:15.306 14:19:20 -- common/autotest_common.sh@10 -- # set +x 00:13:15.563 14:19:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:15.563 14:19:20 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:15.563 14:19:20 -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode4049 00:13:15.563 [2024-12-05 14:19:21.166084] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:15.563 14:19:21 -- target/invalid.sh@40 -- # out='2024/12/05 14:19:21 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode4049 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:13:15.563 request: 00:13:15.563 { 00:13:15.563 "method": "nvmf_create_subsystem", 00:13:15.563 "params": { 00:13:15.563 "nqn": "nqn.2016-06.io.spdk:cnode4049", 00:13:15.563 "tgt_name": "foobar" 00:13:15.563 } 00:13:15.563 } 00:13:15.563 Got JSON-RPC error response 00:13:15.563 GoRPCClient: error on JSON-RPC call' 00:13:15.563 14:19:21 -- target/invalid.sh@41 -- # [[ 2024/12/05 14:19:21 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode4049 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:13:15.563 request: 00:13:15.563 { 00:13:15.563 "method": "nvmf_create_subsystem", 00:13:15.563 "params": { 00:13:15.563 "nqn": "nqn.2016-06.io.spdk:cnode4049", 00:13:15.563 "tgt_name": "foobar" 00:13:15.563 } 00:13:15.563 } 00:13:15.563 Got JSON-RPC error response 00:13:15.563 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:15.563 14:19:21 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:15.563 14:19:21 -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode13413 00:13:15.820 [2024-12-05 14:19:21.446490] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13413: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:16.078 14:19:21 -- target/invalid.sh@45 -- # out='2024/12/05 14:19:21 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode13413 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:13:16.078 request: 00:13:16.078 { 00:13:16.078 "method": "nvmf_create_subsystem", 00:13:16.078 "params": { 00:13:16.078 "nqn": "nqn.2016-06.io.spdk:cnode13413", 00:13:16.078 "serial_number": 
"SPDKISFASTANDAWESOME\u001f" 00:13:16.078 } 00:13:16.078 } 00:13:16.078 Got JSON-RPC error response 00:13:16.078 GoRPCClient: error on JSON-RPC call' 00:13:16.078 14:19:21 -- target/invalid.sh@46 -- # [[ 2024/12/05 14:19:21 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode13413 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:13:16.078 request: 00:13:16.078 { 00:13:16.078 "method": "nvmf_create_subsystem", 00:13:16.078 "params": { 00:13:16.078 "nqn": "nqn.2016-06.io.spdk:cnode13413", 00:13:16.078 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:13:16.078 } 00:13:16.078 } 00:13:16.078 Got JSON-RPC error response 00:13:16.078 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:16.078 14:19:21 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:16.078 14:19:21 -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode13723 00:13:16.078 [2024-12-05 14:19:21.662734] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13723: invalid model number 'SPDK_Controller' 00:13:16.078 14:19:21 -- target/invalid.sh@50 -- # out='2024/12/05 14:19:21 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode13723], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:13:16.078 request: 00:13:16.078 { 00:13:16.078 "method": "nvmf_create_subsystem", 00:13:16.078 "params": { 00:13:16.078 "nqn": "nqn.2016-06.io.spdk:cnode13723", 00:13:16.078 "model_number": "SPDK_Controller\u001f" 00:13:16.078 } 00:13:16.078 } 00:13:16.078 Got JSON-RPC error response 00:13:16.078 GoRPCClient: error on JSON-RPC call' 00:13:16.078 14:19:21 -- target/invalid.sh@51 -- # [[ 2024/12/05 14:19:21 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode13723], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:13:16.078 request: 00:13:16.078 { 00:13:16.078 "method": "nvmf_create_subsystem", 00:13:16.078 "params": { 00:13:16.078 "nqn": "nqn.2016-06.io.spdk:cnode13723", 00:13:16.078 "model_number": "SPDK_Controller\u001f" 00:13:16.078 } 00:13:16.078 } 00:13:16.078 Got JSON-RPC error response 00:13:16.078 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:16.078 14:19:21 -- target/invalid.sh@54 -- # gen_random_s 21 00:13:16.078 14:19:21 -- target/invalid.sh@19 -- # local length=21 ll 00:13:16.078 14:19:21 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:16.078 14:19:21 -- target/invalid.sh@21 -- # local chars 00:13:16.078 14:19:21 -- target/invalid.sh@22 -- # local string 00:13:16.078 14:19:21 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:16.078 14:19:21 -- target/invalid.sh@24 -- # (( ll < length )) 
00:13:16.078 14:19:21 -- target/invalid.sh@25 -- # printf %x 60 00:13:16.078 14:19:21 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:16.078 14:19:21 -- target/invalid.sh@25 -- # string+='<' 00:13:16.078 14:19:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.078 14:19:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.078 14:19:21 -- target/invalid.sh@25 -- # printf %x 38 00:13:16.078 14:19:21 -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:16.078 14:19:21 -- target/invalid.sh@25 -- # string+='&' 00:13:16.078 14:19:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.078 14:19:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.078 14:19:21 -- target/invalid.sh@25 -- # printf %x 96 00:13:16.078 14:19:21 -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:16.079 14:19:21 -- target/invalid.sh@25 -- # string+='`' 00:13:16.079 14:19:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.079 14:19:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.079 14:19:21 -- target/invalid.sh@25 -- # printf %x 45 00:13:16.079 14:19:21 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:16.079 14:19:21 -- target/invalid.sh@25 -- # string+=- 00:13:16.079 14:19:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.079 14:19:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.079 14:19:21 -- target/invalid.sh@25 -- # printf %x 104 00:13:16.079 14:19:21 -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:16.079 14:19:21 -- target/invalid.sh@25 -- # string+=h 00:13:16.079 14:19:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.079 14:19:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.079 14:19:21 -- target/invalid.sh@25 -- # printf %x 119 00:13:16.079 14:19:21 -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:16.079 14:19:21 -- target/invalid.sh@25 -- # string+=w 00:13:16.079 14:19:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.079 14:19:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.079 14:19:21 -- target/invalid.sh@25 -- # printf %x 42 00:13:16.079 14:19:21 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:16.079 14:19:21 -- target/invalid.sh@25 -- # string+='*' 00:13:16.079 14:19:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.079 14:19:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.336 14:19:21 -- target/invalid.sh@25 -- # printf %x 55 00:13:16.336 14:19:21 -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:16.336 14:19:21 -- target/invalid.sh@25 -- # string+=7 00:13:16.336 14:19:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.336 14:19:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.336 14:19:21 -- target/invalid.sh@25 -- # printf %x 96 00:13:16.336 14:19:21 -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:16.336 14:19:21 -- target/invalid.sh@25 -- # string+='`' 00:13:16.336 14:19:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.336 14:19:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.336 14:19:21 -- target/invalid.sh@25 -- # printf %x 35 00:13:16.336 14:19:21 -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:16.337 14:19:21 -- target/invalid.sh@25 -- # string+='#' 00:13:16.337 14:19:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.337 14:19:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.337 14:19:21 -- target/invalid.sh@25 -- # printf %x 108 00:13:16.337 14:19:21 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:16.337 14:19:21 -- target/invalid.sh@25 -- # string+=l 00:13:16.337 14:19:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.337 14:19:21 -- target/invalid.sh@24 -- # (( ll < length 
)) 00:13:16.337 14:19:21 -- target/invalid.sh@25 -- # printf %x 124 00:13:16.337 14:19:21 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:16.337 14:19:21 -- target/invalid.sh@25 -- # string+='|' 00:13:16.337 14:19:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.337 14:19:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.337 14:19:21 -- target/invalid.sh@25 -- # printf %x 69 00:13:16.337 14:19:21 -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:16.337 14:19:21 -- target/invalid.sh@25 -- # string+=E 00:13:16.337 14:19:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.337 14:19:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.337 14:19:21 -- target/invalid.sh@25 -- # printf %x 74 00:13:16.337 14:19:21 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:16.337 14:19:21 -- target/invalid.sh@25 -- # string+=J 00:13:16.337 14:19:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.337 14:19:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.337 14:19:21 -- target/invalid.sh@25 -- # printf %x 85 00:13:16.337 14:19:21 -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:16.337 14:19:21 -- target/invalid.sh@25 -- # string+=U 00:13:16.337 14:19:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.337 14:19:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.337 14:19:21 -- target/invalid.sh@25 -- # printf %x 47 00:13:16.337 14:19:21 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:16.337 14:19:21 -- target/invalid.sh@25 -- # string+=/ 00:13:16.337 14:19:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.337 14:19:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.337 14:19:21 -- target/invalid.sh@25 -- # printf %x 95 00:13:16.337 14:19:21 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:16.337 14:19:21 -- target/invalid.sh@25 -- # string+=_ 00:13:16.337 14:19:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.337 14:19:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.337 14:19:21 -- target/invalid.sh@25 -- # printf %x 84 00:13:16.337 14:19:21 -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:16.337 14:19:21 -- target/invalid.sh@25 -- # string+=T 00:13:16.337 14:19:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.337 14:19:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.337 14:19:21 -- target/invalid.sh@25 -- # printf %x 64 00:13:16.337 14:19:21 -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:16.337 14:19:21 -- target/invalid.sh@25 -- # string+=@ 00:13:16.337 14:19:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.337 14:19:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.337 14:19:21 -- target/invalid.sh@25 -- # printf %x 89 00:13:16.337 14:19:21 -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:16.337 14:19:21 -- target/invalid.sh@25 -- # string+=Y 00:13:16.337 14:19:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.337 14:19:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.337 14:19:21 -- target/invalid.sh@25 -- # printf %x 87 00:13:16.337 14:19:21 -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:16.337 14:19:21 -- target/invalid.sh@25 -- # string+=W 00:13:16.337 14:19:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.337 14:19:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.337 14:19:21 -- target/invalid.sh@28 -- # [[ < == \- ]] 00:13:16.337 14:19:21 -- target/invalid.sh@31 -- # echo '<&`-hw*7`#l|EJU/_T@YW' 00:13:16.337 14:19:21 -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s '<&`-hw*7`#l|EJU/_T@YW' nqn.2016-06.io.spdk:cnode13984 
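(Annotation, not part of the captured trace.) gen_random_s, traced above, assembles the 21-character serial one character at a time: printf %x turns each random code point into hex, echo -e turns the hex escape back into a character, and the character is appended to the working string. The resulting 21-character serial exceeds the 20-byte NVMe serial-number field, so the rpc.py call just issued must be rejected, as the error that follows confirms. A condensed sketch of the capture-and-match pattern every negative test in invalid.sh follows (variable names here are illustrative, not the script's own):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
out=$($rpc nvmf_create_subsystem -s '<&`-hw*7`#l|EJU/_T@YW' nqn.2016-06.io.spdk:cnode13984 2>&1) || true
[[ $out == *"Invalid SN"* ]]   # the step passes only if the target rejected the over-long serial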
00:13:16.595 [2024-12-05 14:19:21.987268] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13984: invalid serial number '<&`-hw*7`#l|EJU/_T@YW' 00:13:16.595 14:19:22 -- target/invalid.sh@54 -- # out='2024/12/05 14:19:21 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode13984 serial_number:<&`-hw*7`#l|EJU/_T@YW], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN <&`-hw*7`#l|EJU/_T@YW 00:13:16.595 request: 00:13:16.595 { 00:13:16.595 "method": "nvmf_create_subsystem", 00:13:16.595 "params": { 00:13:16.595 "nqn": "nqn.2016-06.io.spdk:cnode13984", 00:13:16.595 "serial_number": "<&`-hw*7`#l|EJU/_T@YW" 00:13:16.595 } 00:13:16.595 } 00:13:16.595 Got JSON-RPC error response 00:13:16.595 GoRPCClient: error on JSON-RPC call' 00:13:16.595 14:19:22 -- target/invalid.sh@55 -- # [[ 2024/12/05 14:19:21 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode13984 serial_number:<&`-hw*7`#l|EJU/_T@YW], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN <&`-hw*7`#l|EJU/_T@YW 00:13:16.595 request: 00:13:16.595 { 00:13:16.595 "method": "nvmf_create_subsystem", 00:13:16.595 "params": { 00:13:16.595 "nqn": "nqn.2016-06.io.spdk:cnode13984", 00:13:16.595 "serial_number": "<&`-hw*7`#l|EJU/_T@YW" 00:13:16.596 } 00:13:16.596 } 00:13:16.596 Got JSON-RPC error response 00:13:16.596 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:16.596 14:19:22 -- target/invalid.sh@58 -- # gen_random_s 41 00:13:16.596 14:19:22 -- target/invalid.sh@19 -- # local length=41 ll 00:13:16.596 14:19:22 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:16.596 14:19:22 -- target/invalid.sh@21 -- # local chars 00:13:16.596 14:19:22 -- target/invalid.sh@22 -- # local string 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # printf %x 53 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # string+=5 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # printf %x 78 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # string+=N 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # printf %x 111 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # string+=o 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # printf %x 117 00:13:16.596 14:19:22 -- 
target/invalid.sh@25 -- # echo -e '\x75' 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # string+=u 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # printf %x 116 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # string+=t 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # printf %x 92 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # string+='\' 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # printf %x 116 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # string+=t 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # printf %x 95 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # string+=_ 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # printf %x 66 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # string+=B 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # printf %x 68 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # string+=D 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # printf %x 112 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # string+=p 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # printf %x 40 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # string+='(' 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # printf %x 100 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # string+=d 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # printf %x 47 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # string+=/ 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # printf %x 65 00:13:16.596 14:19:22 -- 
target/invalid.sh@25 -- # echo -e '\x41' 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # string+=A 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # printf %x 44 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # string+=, 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # printf %x 83 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # string+=S 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # printf %x 53 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # string+=5 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # printf %x 43 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # string+=+ 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # printf %x 35 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # string+='#' 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # printf %x 77 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # string+=M 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # printf %x 50 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # string+=2 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # printf %x 89 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # string+=Y 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # printf %x 103 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # string+=g 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # printf %x 124 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # string+='|' 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # printf %x 42 00:13:16.596 14:19:22 -- 
target/invalid.sh@25 -- # echo -e '\x2a' 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # string+='*' 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # printf %x 85 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # string+=U 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # printf %x 94 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # string+='^' 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # printf %x 93 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # string+=']' 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # printf %x 106 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:16.596 14:19:22 -- target/invalid.sh@25 -- # string+=j 00:13:16.596 14:19:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.597 14:19:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.597 14:19:22 -- target/invalid.sh@25 -- # printf %x 44 00:13:16.597 14:19:22 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:16.597 14:19:22 -- target/invalid.sh@25 -- # string+=, 00:13:16.597 14:19:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.597 14:19:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.597 14:19:22 -- target/invalid.sh@25 -- # printf %x 110 00:13:16.597 14:19:22 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:16.597 14:19:22 -- target/invalid.sh@25 -- # string+=n 00:13:16.597 14:19:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.597 14:19:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.597 14:19:22 -- target/invalid.sh@25 -- # printf %x 45 00:13:16.597 14:19:22 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:16.597 14:19:22 -- target/invalid.sh@25 -- # string+=- 00:13:16.597 14:19:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.597 14:19:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.597 14:19:22 -- target/invalid.sh@25 -- # printf %x 33 00:13:16.597 14:19:22 -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:16.597 14:19:22 -- target/invalid.sh@25 -- # string+='!' 
00:13:16.597 14:19:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.597 14:19:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.597 14:19:22 -- target/invalid.sh@25 -- # printf %x 127 00:13:16.597 14:19:22 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:16.597 14:19:22 -- target/invalid.sh@25 -- # string+=$'\177' 00:13:16.597 14:19:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.597 14:19:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.597 14:19:22 -- target/invalid.sh@25 -- # printf %x 84 00:13:16.597 14:19:22 -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:16.597 14:19:22 -- target/invalid.sh@25 -- # string+=T 00:13:16.597 14:19:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.597 14:19:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.597 14:19:22 -- target/invalid.sh@25 -- # printf %x 58 00:13:16.597 14:19:22 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:16.597 14:19:22 -- target/invalid.sh@25 -- # string+=: 00:13:16.597 14:19:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.597 14:19:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.597 14:19:22 -- target/invalid.sh@25 -- # printf %x 57 00:13:16.597 14:19:22 -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:16.597 14:19:22 -- target/invalid.sh@25 -- # string+=9 00:13:16.597 14:19:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.597 14:19:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.597 14:19:22 -- target/invalid.sh@25 -- # printf %x 90 00:13:16.597 14:19:22 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:16.597 14:19:22 -- target/invalid.sh@25 -- # string+=Z 00:13:16.597 14:19:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.597 14:19:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.597 14:19:22 -- target/invalid.sh@25 -- # printf %x 40 00:13:16.597 14:19:22 -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:16.597 14:19:22 -- target/invalid.sh@25 -- # string+='(' 00:13:16.597 14:19:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.597 14:19:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.597 14:19:22 -- target/invalid.sh@25 -- # printf %x 68 00:13:16.597 14:19:22 -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:16.597 14:19:22 -- target/invalid.sh@25 -- # string+=D 00:13:16.597 14:19:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.597 14:19:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.597 14:19:22 -- target/invalid.sh@28 -- # [[ 5 == \- ]] 00:13:16.597 14:19:22 -- target/invalid.sh@31 -- # echo '5Nout\t_BDp(d/A,S5+#M2Yg|*U^]j,n-!T:9Z(D' 00:13:16.597 14:19:22 -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d '5Nout\t_BDp(d/A,S5+#M2Yg|*U^]j,n-!T:9Z(D' nqn.2016-06.io.spdk:cnode5964 00:13:16.854 [2024-12-05 14:19:22.496034] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5964: invalid model number '5Nout\t_BDp(d/A,S5+#M2Yg|*U^]j,n-!T:9Z(D' 00:13:17.112 14:19:22 -- target/invalid.sh@58 -- # out='2024/12/05 14:19:22 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:5Nout\t_BDp(d/A,S5+#M2Yg|*U^]j,n-!T:9Z(D nqn:nqn.2016-06.io.spdk:cnode5964], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN 5Nout\t_BDp(d/A,S5+#M2Yg|*U^]j,n-!T:9Z(D 00:13:17.112 request: 00:13:17.112 { 00:13:17.112 "method": "nvmf_create_subsystem", 00:13:17.112 "params": { 00:13:17.112 "nqn": "nqn.2016-06.io.spdk:cnode5964", 00:13:17.112 "model_number": "5Nout\\t_BDp(d/A,S5+#M2Yg|*U^]j,n-!\u007fT:9Z(D" 
00:13:17.112 } 00:13:17.112 } 00:13:17.112 Got JSON-RPC error response 00:13:17.112 GoRPCClient: error on JSON-RPC call' 00:13:17.112 14:19:22 -- target/invalid.sh@59 -- # [[ 2024/12/05 14:19:22 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:5Nout\t_BDp(d/A,S5+#M2Yg|*U^]j,n-!T:9Z(D nqn:nqn.2016-06.io.spdk:cnode5964], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN 5Nout\t_BDp(d/A,S5+#M2Yg|*U^]j,n-!T:9Z(D 00:13:17.112 request: 00:13:17.112 { 00:13:17.112 "method": "nvmf_create_subsystem", 00:13:17.112 "params": { 00:13:17.112 "nqn": "nqn.2016-06.io.spdk:cnode5964", 00:13:17.112 "model_number": "5Nout\\t_BDp(d/A,S5+#M2Yg|*U^]j,n-!\u007fT:9Z(D" 00:13:17.112 } 00:13:17.112 } 00:13:17.112 Got JSON-RPC error response 00:13:17.112 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:17.112 14:19:22 -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:17.112 [2024-12-05 14:19:22.716519] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:17.112 14:19:22 -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:17.370 14:19:22 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:17.370 14:19:22 -- target/invalid.sh@67 -- # echo '' 00:13:17.370 14:19:22 -- target/invalid.sh@67 -- # head -n 1 00:13:17.370 14:19:22 -- target/invalid.sh@67 -- # IP= 00:13:17.370 14:19:22 -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:17.628 [2024-12-05 14:19:23.223780] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:17.628 14:19:23 -- target/invalid.sh@69 -- # out='2024/12/05 14:19:23 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:13:17.628 request: 00:13:17.628 { 00:13:17.628 "method": "nvmf_subsystem_remove_listener", 00:13:17.628 "params": { 00:13:17.628 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:17.628 "listen_address": { 00:13:17.628 "trtype": "tcp", 00:13:17.628 "traddr": "", 00:13:17.628 "trsvcid": "4421" 00:13:17.628 } 00:13:17.628 } 00:13:17.628 } 00:13:17.628 Got JSON-RPC error response 00:13:17.628 GoRPCClient: error on JSON-RPC call' 00:13:17.628 14:19:23 -- target/invalid.sh@70 -- # [[ 2024/12/05 14:19:23 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:13:17.628 request: 00:13:17.628 { 00:13:17.628 "method": "nvmf_subsystem_remove_listener", 00:13:17.628 "params": { 00:13:17.628 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:17.628 "listen_address": { 00:13:17.628 "trtype": "tcp", 00:13:17.628 "traddr": "", 00:13:17.628 "trsvcid": "4421" 00:13:17.628 } 00:13:17.628 } 00:13:17.628 } 00:13:17.628 Got JSON-RPC error response 00:13:17.628 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:17.628 14:19:23 -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11060 -i 0 00:13:17.887 
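(Annotation, not part of the captured trace.) The command above opens the cntlid-range checks. Taken together, the rejections that follow ([0-65519], [65520-65519], [1-0], [1-65520], [6-5]) imply the accepted window: min_cntlid and max_cntlid must each lie in 1..65519 and min must not exceed max. A presumably valid form of the same call, for contrast, would be:

# -i sets min_cntlid, -I sets max_cntlid (both visible in the JSON-RPC params below)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem \
    nqn.2016-06.io.spdk:cnode11060 -i 1 -I 65519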
[2024-12-05 14:19:23.512094] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11060: invalid cntlid range [0-65519] 00:13:17.887 14:19:23 -- target/invalid.sh@73 -- # out='2024/12/05 14:19:23 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode11060], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:13:17.887 request: 00:13:17.887 { 00:13:17.887 "method": "nvmf_create_subsystem", 00:13:17.887 "params": { 00:13:17.887 "nqn": "nqn.2016-06.io.spdk:cnode11060", 00:13:17.887 "min_cntlid": 0 00:13:17.887 } 00:13:17.887 } 00:13:17.887 Got JSON-RPC error response 00:13:17.887 GoRPCClient: error on JSON-RPC call' 00:13:17.887 14:19:23 -- target/invalid.sh@74 -- # [[ 2024/12/05 14:19:23 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode11060], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:13:17.887 request: 00:13:17.887 { 00:13:17.887 "method": "nvmf_create_subsystem", 00:13:17.887 "params": { 00:13:17.887 "nqn": "nqn.2016-06.io.spdk:cnode11060", 00:13:17.887 "min_cntlid": 0 00:13:17.887 } 00:13:17.887 } 00:13:17.887 Got JSON-RPC error response 00:13:17.887 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:18.145 14:19:23 -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4224 -i 65520 00:13:18.145 [2024-12-05 14:19:23.780545] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4224: invalid cntlid range [65520-65519] 00:13:18.403 14:19:23 -- target/invalid.sh@75 -- # out='2024/12/05 14:19:23 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode4224], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:13:18.403 request: 00:13:18.403 { 00:13:18.403 "method": "nvmf_create_subsystem", 00:13:18.403 "params": { 00:13:18.403 "nqn": "nqn.2016-06.io.spdk:cnode4224", 00:13:18.403 "min_cntlid": 65520 00:13:18.403 } 00:13:18.403 } 00:13:18.403 Got JSON-RPC error response 00:13:18.403 GoRPCClient: error on JSON-RPC call' 00:13:18.403 14:19:23 -- target/invalid.sh@76 -- # [[ 2024/12/05 14:19:23 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode4224], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:13:18.403 request: 00:13:18.403 { 00:13:18.403 "method": "nvmf_create_subsystem", 00:13:18.403 "params": { 00:13:18.403 "nqn": "nqn.2016-06.io.spdk:cnode4224", 00:13:18.403 "min_cntlid": 65520 00:13:18.403 } 00:13:18.403 } 00:13:18.403 Got JSON-RPC error response 00:13:18.403 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:18.403 14:19:23 -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9894 -I 0 00:13:18.662 [2024-12-05 14:19:24.072998] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9894: invalid cntlid range [1-0] 00:13:18.662 14:19:24 -- target/invalid.sh@77 -- # out='2024/12/05 14:19:24 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode9894], err: 
error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:13:18.662 request: 00:13:18.662 { 00:13:18.662 "method": "nvmf_create_subsystem", 00:13:18.662 "params": { 00:13:18.662 "nqn": "nqn.2016-06.io.spdk:cnode9894", 00:13:18.662 "max_cntlid": 0 00:13:18.662 } 00:13:18.662 } 00:13:18.662 Got JSON-RPC error response 00:13:18.662 GoRPCClient: error on JSON-RPC call' 00:13:18.662 14:19:24 -- target/invalid.sh@78 -- # [[ 2024/12/05 14:19:24 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode9894], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:13:18.662 request: 00:13:18.662 { 00:13:18.662 "method": "nvmf_create_subsystem", 00:13:18.662 "params": { 00:13:18.662 "nqn": "nqn.2016-06.io.spdk:cnode9894", 00:13:18.662 "max_cntlid": 0 00:13:18.662 } 00:13:18.662 } 00:13:18.662 Got JSON-RPC error response 00:13:18.662 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:18.662 14:19:24 -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14696 -I 65520 00:13:18.662 [2024-12-05 14:19:24.285341] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14696: invalid cntlid range [1-65520] 00:13:18.946 14:19:24 -- target/invalid.sh@79 -- # out='2024/12/05 14:19:24 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode14696], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:13:18.946 request: 00:13:18.946 { 00:13:18.946 "method": "nvmf_create_subsystem", 00:13:18.946 "params": { 00:13:18.946 "nqn": "nqn.2016-06.io.spdk:cnode14696", 00:13:18.946 "max_cntlid": 65520 00:13:18.946 } 00:13:18.946 } 00:13:18.946 Got JSON-RPC error response 00:13:18.946 GoRPCClient: error on JSON-RPC call' 00:13:18.946 14:19:24 -- target/invalid.sh@80 -- # [[ 2024/12/05 14:19:24 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode14696], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:13:18.946 request: 00:13:18.946 { 00:13:18.946 "method": "nvmf_create_subsystem", 00:13:18.946 "params": { 00:13:18.946 "nqn": "nqn.2016-06.io.spdk:cnode14696", 00:13:18.946 "max_cntlid": 65520 00:13:18.946 } 00:13:18.946 } 00:13:18.946 Got JSON-RPC error response 00:13:18.946 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:18.946 14:19:24 -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7411 -i 6 -I 5 00:13:18.946 [2024-12-05 14:19:24.497657] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7411: invalid cntlid range [6-5] 00:13:18.946 14:19:24 -- target/invalid.sh@83 -- # out='2024/12/05 14:19:24 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode7411], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:13:18.946 request: 00:13:18.946 { 00:13:18.946 "method": "nvmf_create_subsystem", 00:13:18.946 "params": { 00:13:18.946 "nqn": "nqn.2016-06.io.spdk:cnode7411", 00:13:18.946 "min_cntlid": 6, 00:13:18.946 "max_cntlid": 5 00:13:18.946 } 00:13:18.946 } 
00:13:18.946 Got JSON-RPC error response 00:13:18.946 GoRPCClient: error on JSON-RPC call' 00:13:18.946 14:19:24 -- target/invalid.sh@84 -- # [[ 2024/12/05 14:19:24 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode7411], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:13:18.946 request: 00:13:18.946 { 00:13:18.946 "method": "nvmf_create_subsystem", 00:13:18.946 "params": { 00:13:18.946 "nqn": "nqn.2016-06.io.spdk:cnode7411", 00:13:18.946 "min_cntlid": 6, 00:13:18.946 "max_cntlid": 5 00:13:18.946 } 00:13:18.946 } 00:13:18.946 Got JSON-RPC error response 00:13:18.946 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:18.946 14:19:24 -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:19.205 14:19:24 -- target/invalid.sh@87 -- # out='request: 00:13:19.205 { 00:13:19.205 "name": "foobar", 00:13:19.205 "method": "nvmf_delete_target", 00:13:19.205 "req_id": 1 00:13:19.205 } 00:13:19.205 Got JSON-RPC error response 00:13:19.205 response: 00:13:19.205 { 00:13:19.205 "code": -32602, 00:13:19.205 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:19.205 }' 00:13:19.205 14:19:24 -- target/invalid.sh@88 -- # [[ request: 00:13:19.205 { 00:13:19.205 "name": "foobar", 00:13:19.205 "method": "nvmf_delete_target", 00:13:19.205 "req_id": 1 00:13:19.205 } 00:13:19.205 Got JSON-RPC error response 00:13:19.205 response: 00:13:19.205 { 00:13:19.205 "code": -32602, 00:13:19.205 "message": "The specified target doesn't exist, cannot delete it." 00:13:19.205 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:19.205 14:19:24 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:19.205 14:19:24 -- target/invalid.sh@91 -- # nvmftestfini 00:13:19.205 14:19:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:19.205 14:19:24 -- nvmf/common.sh@116 -- # sync 00:13:19.205 14:19:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:19.205 14:19:24 -- nvmf/common.sh@119 -- # set +e 00:13:19.205 14:19:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:19.205 14:19:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:19.205 rmmod nvme_tcp 00:13:19.205 rmmod nvme_fabrics 00:13:19.205 rmmod nvme_keyring 00:13:19.205 14:19:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:19.205 14:19:24 -- nvmf/common.sh@123 -- # set -e 00:13:19.205 14:19:24 -- nvmf/common.sh@124 -- # return 0 00:13:19.205 14:19:24 -- nvmf/common.sh@477 -- # '[' -n 78536 ']' 00:13:19.205 14:19:24 -- nvmf/common.sh@478 -- # killprocess 78536 00:13:19.205 14:19:24 -- common/autotest_common.sh@936 -- # '[' -z 78536 ']' 00:13:19.205 14:19:24 -- common/autotest_common.sh@940 -- # kill -0 78536 00:13:19.205 14:19:24 -- common/autotest_common.sh@941 -- # uname 00:13:19.205 14:19:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:19.205 14:19:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78536 00:13:19.205 14:19:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:19.205 14:19:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:19.205 14:19:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78536' 00:13:19.205 killing process with pid 78536 00:13:19.205 14:19:24 -- common/autotest_common.sh@955 -- # kill 
78536 00:13:19.205 14:19:24 -- common/autotest_common.sh@960 -- # wait 78536 00:13:19.464 14:19:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:19.464 14:19:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:19.464 14:19:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:19.464 14:19:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:19.464 14:19:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:19.464 14:19:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.464 14:19:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:19.464 14:19:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.464 14:19:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:19.464 00:13:19.464 real 0m5.557s 00:13:19.464 user 0m21.877s 00:13:19.464 sys 0m1.306s 00:13:19.464 14:19:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:19.464 14:19:25 -- common/autotest_common.sh@10 -- # set +x 00:13:19.464 ************************************ 00:13:19.464 END TEST nvmf_invalid 00:13:19.464 ************************************ 00:13:19.464 14:19:25 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:19.464 14:19:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:19.464 14:19:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:19.464 14:19:25 -- common/autotest_common.sh@10 -- # set +x 00:13:19.464 ************************************ 00:13:19.464 START TEST nvmf_abort 00:13:19.464 ************************************ 00:13:19.464 14:19:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:19.722 * Looking for test storage... 00:13:19.722 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:19.722 14:19:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:19.722 14:19:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:19.722 14:19:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:19.722 14:19:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:19.722 14:19:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:19.722 14:19:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:19.722 14:19:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:19.722 14:19:25 -- scripts/common.sh@335 -- # IFS=.-: 00:13:19.722 14:19:25 -- scripts/common.sh@335 -- # read -ra ver1 00:13:19.722 14:19:25 -- scripts/common.sh@336 -- # IFS=.-: 00:13:19.722 14:19:25 -- scripts/common.sh@336 -- # read -ra ver2 00:13:19.722 14:19:25 -- scripts/common.sh@337 -- # local 'op=<' 00:13:19.722 14:19:25 -- scripts/common.sh@339 -- # ver1_l=2 00:13:19.722 14:19:25 -- scripts/common.sh@340 -- # ver2_l=1 00:13:19.722 14:19:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:19.722 14:19:25 -- scripts/common.sh@343 -- # case "$op" in 00:13:19.722 14:19:25 -- scripts/common.sh@344 -- # : 1 00:13:19.722 14:19:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:19.722 14:19:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:19.722 14:19:25 -- scripts/common.sh@364 -- # decimal 1 00:13:19.722 14:19:25 -- scripts/common.sh@352 -- # local d=1 00:13:19.722 14:19:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:19.722 14:19:25 -- scripts/common.sh@354 -- # echo 1 00:13:19.722 14:19:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:19.722 14:19:25 -- scripts/common.sh@365 -- # decimal 2 00:13:19.722 14:19:25 -- scripts/common.sh@352 -- # local d=2 00:13:19.722 14:19:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:19.722 14:19:25 -- scripts/common.sh@354 -- # echo 2 00:13:19.722 14:19:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:19.722 14:19:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:19.722 14:19:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:19.722 14:19:25 -- scripts/common.sh@367 -- # return 0 00:13:19.722 14:19:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:19.722 14:19:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:19.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.722 --rc genhtml_branch_coverage=1 00:13:19.722 --rc genhtml_function_coverage=1 00:13:19.722 --rc genhtml_legend=1 00:13:19.722 --rc geninfo_all_blocks=1 00:13:19.722 --rc geninfo_unexecuted_blocks=1 00:13:19.722 00:13:19.722 ' 00:13:19.722 14:19:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:19.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.722 --rc genhtml_branch_coverage=1 00:13:19.722 --rc genhtml_function_coverage=1 00:13:19.722 --rc genhtml_legend=1 00:13:19.722 --rc geninfo_all_blocks=1 00:13:19.722 --rc geninfo_unexecuted_blocks=1 00:13:19.722 00:13:19.722 ' 00:13:19.722 14:19:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:19.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.722 --rc genhtml_branch_coverage=1 00:13:19.722 --rc genhtml_function_coverage=1 00:13:19.722 --rc genhtml_legend=1 00:13:19.722 --rc geninfo_all_blocks=1 00:13:19.722 --rc geninfo_unexecuted_blocks=1 00:13:19.722 00:13:19.722 ' 00:13:19.722 14:19:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:19.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.722 --rc genhtml_branch_coverage=1 00:13:19.722 --rc genhtml_function_coverage=1 00:13:19.722 --rc genhtml_legend=1 00:13:19.722 --rc geninfo_all_blocks=1 00:13:19.722 --rc geninfo_unexecuted_blocks=1 00:13:19.722 00:13:19.722 ' 00:13:19.722 14:19:25 -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:19.722 14:19:25 -- nvmf/common.sh@7 -- # uname -s 00:13:19.722 14:19:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:19.722 14:19:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:19.722 14:19:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:19.722 14:19:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:19.722 14:19:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:19.722 14:19:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:19.722 14:19:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:19.722 14:19:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:19.722 14:19:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:19.722 14:19:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:19.722 14:19:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:13:19.722 
14:19:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:13:19.722 14:19:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:19.722 14:19:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:19.722 14:19:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:19.722 14:19:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:19.722 14:19:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:19.722 14:19:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:19.722 14:19:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:19.722 14:19:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.722 14:19:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.722 14:19:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.722 14:19:25 -- paths/export.sh@5 -- # export PATH 00:13:19.722 14:19:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.722 14:19:25 -- nvmf/common.sh@46 -- # : 0 00:13:19.722 14:19:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:19.722 14:19:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:19.722 14:19:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:19.722 14:19:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:19.722 14:19:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:19.722 14:19:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
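(Annotation, not part of the captured trace.) Among the values set while sourcing nvmf/common.sh above, the host identity comes from nvme-cli: gen-hostnqn returns a UUID-based NQN, and the UUID portion is reused as the host ID, so both entries of NVME_HOST stay consistent. A rough sketch of that relationship (the exact extraction common.sh performs is an assumption here):

NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}  # assumption: the UUID suffix becomes the host ID
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")  # intended for 'nvme connect'-style calls elsewhere in the suite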
00:13:19.722 14:19:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:19.722 14:19:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:19.722 14:19:25 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:19.722 14:19:25 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:19.722 14:19:25 -- target/abort.sh@14 -- # nvmftestinit 00:13:19.722 14:19:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:19.722 14:19:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:19.722 14:19:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:19.722 14:19:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:19.722 14:19:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:19.722 14:19:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.722 14:19:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:19.722 14:19:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.722 14:19:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:19.722 14:19:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:19.722 14:19:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:19.722 14:19:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:19.722 14:19:25 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:19.722 14:19:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:19.722 14:19:25 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:19.722 14:19:25 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:19.722 14:19:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:19.722 14:19:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:19.722 14:19:25 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:19.723 14:19:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:19.723 14:19:25 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:19.723 14:19:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:19.723 14:19:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:19.723 14:19:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:19.723 14:19:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:19.723 14:19:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:19.723 14:19:25 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:19.723 14:19:25 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:19.723 Cannot find device "nvmf_tgt_br" 00:13:19.723 14:19:25 -- nvmf/common.sh@154 -- # true 00:13:19.723 14:19:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:19.723 Cannot find device "nvmf_tgt_br2" 00:13:19.723 14:19:25 -- nvmf/common.sh@155 -- # true 00:13:19.723 14:19:25 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:19.723 14:19:25 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:19.723 Cannot find device "nvmf_tgt_br" 00:13:19.723 14:19:25 -- nvmf/common.sh@157 -- # true 00:13:19.723 14:19:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:19.723 Cannot find device "nvmf_tgt_br2" 00:13:19.723 14:19:25 -- nvmf/common.sh@158 -- # true 00:13:19.723 14:19:25 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:19.981 14:19:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:19.981 14:19:25 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:19.981 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:13:19.981 14:19:25 -- nvmf/common.sh@161 -- # true 00:13:19.981 14:19:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:19.981 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:19.981 14:19:25 -- nvmf/common.sh@162 -- # true 00:13:19.981 14:19:25 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:19.981 14:19:25 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:19.981 14:19:25 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:19.981 14:19:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:19.981 14:19:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:19.981 14:19:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:19.981 14:19:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:19.981 14:19:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:19.981 14:19:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:19.981 14:19:25 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:19.981 14:19:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:19.981 14:19:25 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:19.981 14:19:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:19.981 14:19:25 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:19.981 14:19:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:19.981 14:19:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:19.981 14:19:25 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:19.981 14:19:25 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:19.981 14:19:25 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:19.981 14:19:25 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:19.981 14:19:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:19.981 14:19:25 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:19.981 14:19:25 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:19.981 14:19:25 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:19.981 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:19.981 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:13:19.981 00:13:19.981 --- 10.0.0.2 ping statistics --- 00:13:19.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.981 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:13:19.981 14:19:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:19.981 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:19.981 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:13:19.981 00:13:19.981 --- 10.0.0.3 ping statistics --- 00:13:19.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.981 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:13:19.981 14:19:25 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:20.239 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:20.239 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:13:20.239 00:13:20.239 --- 10.0.0.1 ping statistics --- 00:13:20.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.239 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:13:20.239 14:19:25 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:20.239 14:19:25 -- nvmf/common.sh@421 -- # return 0 00:13:20.239 14:19:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:20.239 14:19:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:20.239 14:19:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:20.239 14:19:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:20.239 14:19:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:20.239 14:19:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:20.239 14:19:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:20.239 14:19:25 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:20.239 14:19:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:20.239 14:19:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:20.239 14:19:25 -- common/autotest_common.sh@10 -- # set +x 00:13:20.239 14:19:25 -- nvmf/common.sh@469 -- # nvmfpid=79045 00:13:20.239 14:19:25 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:20.239 14:19:25 -- nvmf/common.sh@470 -- # waitforlisten 79045 00:13:20.240 14:19:25 -- common/autotest_common.sh@829 -- # '[' -z 79045 ']' 00:13:20.240 14:19:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.240 14:19:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:20.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.240 14:19:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.240 14:19:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:20.240 14:19:25 -- common/autotest_common.sh@10 -- # set +x 00:13:20.240 [2024-12-05 14:19:25.712209] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:20.240 [2024-12-05 14:19:25.712289] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.240 [2024-12-05 14:19:25.854221] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:20.498 [2024-12-05 14:19:25.929365] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:20.498 [2024-12-05 14:19:25.929785] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:20.498 [2024-12-05 14:19:25.929914] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:20.498 [2024-12-05 14:19:25.930001] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
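(Annotation, not part of the captured trace.) Condensed from the nvmf_veth_init and nvmfappstart trace above, the topology the three pings just verified looks roughly like this; the second target interface (10.0.0.3 on nvmf_tgt_if2) is set up the same way, and the intermediate "ip link set ... up" steps are omitted:

ip netns add nvmf_tgt_ns_spdk                                   # target gets its own network namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator veth pair stays in the root ns
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target veth pair...
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                  # ...with one end moved into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
ip link add nvmf_br type bridge                                 # bridge ties the *_br peer ends together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &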
00:13:20.498 [2024-12-05 14:19:25.930221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:20.498 [2024-12-05 14:19:25.930859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:20.498 [2024-12-05 14:19:25.930866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:21.065 14:19:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:21.065 14:19:26 -- common/autotest_common.sh@862 -- # return 0 00:13:21.065 14:19:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:21.065 14:19:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:21.065 14:19:26 -- common/autotest_common.sh@10 -- # set +x 00:13:21.065 14:19:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:21.065 14:19:26 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:21.065 14:19:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.065 14:19:26 -- common/autotest_common.sh@10 -- # set +x 00:13:21.065 [2024-12-05 14:19:26.695735] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:21.065 14:19:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.065 14:19:26 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:21.065 14:19:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.065 14:19:26 -- common/autotest_common.sh@10 -- # set +x 00:13:21.325 Malloc0 00:13:21.325 14:19:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.325 14:19:26 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:21.325 14:19:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.325 14:19:26 -- common/autotest_common.sh@10 -- # set +x 00:13:21.325 Delay0 00:13:21.325 14:19:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.325 14:19:26 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:21.325 14:19:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.325 14:19:26 -- common/autotest_common.sh@10 -- # set +x 00:13:21.325 14:19:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.325 14:19:26 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:21.325 14:19:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.325 14:19:26 -- common/autotest_common.sh@10 -- # set +x 00:13:21.325 14:19:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.325 14:19:26 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:21.325 14:19:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.325 14:19:26 -- common/autotest_common.sh@10 -- # set +x 00:13:21.325 [2024-12-05 14:19:26.780714] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:21.325 14:19:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.325 14:19:26 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:21.325 14:19:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.325 14:19:26 -- common/autotest_common.sh@10 -- # set +x 00:13:21.325 14:19:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.325 14:19:26 -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:21.325 [2024-12-05 14:19:26.966918] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:23.858 Initializing NVMe Controllers 00:13:23.858 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:23.858 controller IO queue size 128 less than required 00:13:23.858 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:23.858 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:23.858 Initialization complete. Launching workers. 00:13:23.858 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 38004 00:13:23.858 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38069, failed to submit 62 00:13:23.858 success 38004, unsuccess 65, failed 0 00:13:23.858 14:19:28 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:23.858 14:19:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.858 14:19:28 -- common/autotest_common.sh@10 -- # set +x 00:13:23.858 14:19:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.858 14:19:29 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:23.858 14:19:29 -- target/abort.sh@38 -- # nvmftestfini 00:13:23.858 14:19:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:23.858 14:19:29 -- nvmf/common.sh@116 -- # sync 00:13:23.858 14:19:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:23.858 14:19:29 -- nvmf/common.sh@119 -- # set +e 00:13:23.858 14:19:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:23.858 14:19:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:23.858 rmmod nvme_tcp 00:13:23.858 rmmod nvme_fabrics 00:13:23.858 rmmod nvme_keyring 00:13:23.858 14:19:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:23.858 14:19:29 -- nvmf/common.sh@123 -- # set -e 00:13:23.858 14:19:29 -- nvmf/common.sh@124 -- # return 0 00:13:23.858 14:19:29 -- nvmf/common.sh@477 -- # '[' -n 79045 ']' 00:13:23.858 14:19:29 -- nvmf/common.sh@478 -- # killprocess 79045 00:13:23.858 14:19:29 -- common/autotest_common.sh@936 -- # '[' -z 79045 ']' 00:13:23.858 14:19:29 -- common/autotest_common.sh@940 -- # kill -0 79045 00:13:23.858 14:19:29 -- common/autotest_common.sh@941 -- # uname 00:13:23.858 14:19:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:23.858 14:19:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79045 00:13:23.858 14:19:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:23.858 14:19:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:23.858 killing process with pid 79045 00:13:23.858 14:19:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79045' 00:13:23.858 14:19:29 -- common/autotest_common.sh@955 -- # kill 79045 00:13:23.858 14:19:29 -- common/autotest_common.sh@960 -- # wait 79045 00:13:23.858 14:19:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:23.858 14:19:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:23.858 14:19:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:23.858 14:19:29 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:23.858 14:19:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:23.858 14:19:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.858 
14:19:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:23.858 14:19:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.858 14:19:29 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:23.858 00:13:23.858 real 0m4.396s 00:13:23.858 user 0m12.345s 00:13:23.858 sys 0m1.133s 00:13:23.858 14:19:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:23.858 14:19:29 -- common/autotest_common.sh@10 -- # set +x 00:13:23.858 ************************************ 00:13:23.858 END TEST nvmf_abort 00:13:23.858 ************************************ 00:13:24.118 14:19:29 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:24.118 14:19:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:24.118 14:19:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:24.118 14:19:29 -- common/autotest_common.sh@10 -- # set +x 00:13:24.118 ************************************ 00:13:24.118 START TEST nvmf_ns_hotplug_stress 00:13:24.118 ************************************ 00:13:24.118 14:19:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:24.118 * Looking for test storage... 00:13:24.118 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:24.118 14:19:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:24.118 14:19:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:24.118 14:19:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:24.118 14:19:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:24.118 14:19:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:24.118 14:19:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:24.118 14:19:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:24.118 14:19:29 -- scripts/common.sh@335 -- # IFS=.-: 00:13:24.118 14:19:29 -- scripts/common.sh@335 -- # read -ra ver1 00:13:24.118 14:19:29 -- scripts/common.sh@336 -- # IFS=.-: 00:13:24.118 14:19:29 -- scripts/common.sh@336 -- # read -ra ver2 00:13:24.118 14:19:29 -- scripts/common.sh@337 -- # local 'op=<' 00:13:24.118 14:19:29 -- scripts/common.sh@339 -- # ver1_l=2 00:13:24.118 14:19:29 -- scripts/common.sh@340 -- # ver2_l=1 00:13:24.118 14:19:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:24.118 14:19:29 -- scripts/common.sh@343 -- # case "$op" in 00:13:24.118 14:19:29 -- scripts/common.sh@344 -- # : 1 00:13:24.118 14:19:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:24.118 14:19:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:24.118 14:19:29 -- scripts/common.sh@364 -- # decimal 1 00:13:24.118 14:19:29 -- scripts/common.sh@352 -- # local d=1 00:13:24.118 14:19:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:24.118 14:19:29 -- scripts/common.sh@354 -- # echo 1 00:13:24.118 14:19:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:24.118 14:19:29 -- scripts/common.sh@365 -- # decimal 2 00:13:24.118 14:19:29 -- scripts/common.sh@352 -- # local d=2 00:13:24.118 14:19:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:24.118 14:19:29 -- scripts/common.sh@354 -- # echo 2 00:13:24.118 14:19:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:24.118 14:19:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:24.118 14:19:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:24.118 14:19:29 -- scripts/common.sh@367 -- # return 0 00:13:24.118 14:19:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:24.118 14:19:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:24.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.118 --rc genhtml_branch_coverage=1 00:13:24.118 --rc genhtml_function_coverage=1 00:13:24.118 --rc genhtml_legend=1 00:13:24.118 --rc geninfo_all_blocks=1 00:13:24.118 --rc geninfo_unexecuted_blocks=1 00:13:24.118 00:13:24.118 ' 00:13:24.118 14:19:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:24.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.118 --rc genhtml_branch_coverage=1 00:13:24.118 --rc genhtml_function_coverage=1 00:13:24.118 --rc genhtml_legend=1 00:13:24.118 --rc geninfo_all_blocks=1 00:13:24.118 --rc geninfo_unexecuted_blocks=1 00:13:24.118 00:13:24.118 ' 00:13:24.118 14:19:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:24.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.118 --rc genhtml_branch_coverage=1 00:13:24.118 --rc genhtml_function_coverage=1 00:13:24.118 --rc genhtml_legend=1 00:13:24.118 --rc geninfo_all_blocks=1 00:13:24.118 --rc geninfo_unexecuted_blocks=1 00:13:24.118 00:13:24.118 ' 00:13:24.118 14:19:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:24.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.118 --rc genhtml_branch_coverage=1 00:13:24.118 --rc genhtml_function_coverage=1 00:13:24.118 --rc genhtml_legend=1 00:13:24.118 --rc geninfo_all_blocks=1 00:13:24.118 --rc geninfo_unexecuted_blocks=1 00:13:24.118 00:13:24.118 ' 00:13:24.118 14:19:29 -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:24.118 14:19:29 -- nvmf/common.sh@7 -- # uname -s 00:13:24.118 14:19:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:24.118 14:19:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:24.118 14:19:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:24.118 14:19:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:24.118 14:19:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:24.118 14:19:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:24.118 14:19:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:24.118 14:19:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:24.118 14:19:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:24.118 14:19:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:24.119 14:19:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 
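The scripts/common.sh walk a little further up (lt 1.15 2 / cmp_versions 1.15 '<' 2) is the gate that decides whether the installed lcov needs the branch/function-coverage flags. A simplified re-creation of the comparison those xtrace lines step through (the real helper also pushes each component through its decimal normalizer, as the @352-@354 lines show):

# Split both versions on '.', '-' and ':', then compare component by component
# as integers; missing components count as 0.
cmp_versions() {
    local op=$2 v
    local IFS=.-:
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        if (( a > b )); then [[ $op == '>' || $op == '>=' ]]; return; fi
        if (( a < b )); then [[ $op == '<' || $op == '<=' ]]; return; fi
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # equal: only the inclusive operators pass
}

cmp_versions 1.15 '<' 2 && echo "lcov 1.15 predates 2.x"   # matches the 'return 0' traced above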
00:13:24.119 14:19:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:13:24.119 14:19:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:24.119 14:19:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:24.119 14:19:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:24.119 14:19:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:24.119 14:19:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:24.119 14:19:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:24.119 14:19:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:24.119 14:19:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.119 14:19:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.119 14:19:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.119 14:19:29 -- paths/export.sh@5 -- # export PATH 00:13:24.119 14:19:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.119 14:19:29 -- nvmf/common.sh@46 -- # : 0 00:13:24.119 14:19:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:24.119 14:19:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:24.119 14:19:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:24.119 14:19:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:24.119 14:19:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:24.119 14:19:29 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:13:24.119 14:19:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:24.119 14:19:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:24.119 14:19:29 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:24.119 14:19:29 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:24.119 14:19:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:24.119 14:19:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:24.119 14:19:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:24.119 14:19:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:24.119 14:19:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:24.119 14:19:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.119 14:19:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:24.119 14:19:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.119 14:19:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:24.119 14:19:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:24.119 14:19:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:24.119 14:19:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:24.119 14:19:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:24.119 14:19:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:24.119 14:19:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:24.119 14:19:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:24.119 14:19:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:24.119 14:19:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:24.119 14:19:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:24.119 14:19:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:24.119 14:19:29 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:24.119 14:19:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:24.119 14:19:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:24.119 14:19:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:24.119 14:19:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:24.119 14:19:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:24.119 14:19:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:24.119 14:19:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:24.119 Cannot find device "nvmf_tgt_br" 00:13:24.119 14:19:29 -- nvmf/common.sh@154 -- # true 00:13:24.119 14:19:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:24.119 Cannot find device "nvmf_tgt_br2" 00:13:24.119 14:19:29 -- nvmf/common.sh@155 -- # true 00:13:24.119 14:19:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:24.119 14:19:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:24.119 Cannot find device "nvmf_tgt_br" 00:13:24.119 14:19:29 -- nvmf/common.sh@157 -- # true 00:13:24.119 14:19:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:24.119 Cannot find device "nvmf_tgt_br2" 00:13:24.119 14:19:29 -- nvmf/common.sh@158 -- # true 00:13:24.119 14:19:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:24.379 14:19:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:24.379 14:19:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:24.379 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:13:24.379 14:19:29 -- nvmf/common.sh@161 -- # true 00:13:24.379 14:19:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:24.379 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:24.379 14:19:29 -- nvmf/common.sh@162 -- # true 00:13:24.379 14:19:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:24.379 14:19:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:24.379 14:19:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:24.379 14:19:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:24.379 14:19:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:24.379 14:19:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:24.379 14:19:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:24.379 14:19:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:24.379 14:19:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:24.379 14:19:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:24.379 14:19:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:24.379 14:19:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:24.379 14:19:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:24.379 14:19:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:24.379 14:19:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:24.379 14:19:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:24.379 14:19:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:24.379 14:19:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:24.379 14:19:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:24.379 14:19:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:24.379 14:19:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:24.379 14:19:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:24.379 14:19:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:24.379 14:19:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:24.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:24.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:13:24.379 00:13:24.379 --- 10.0.0.2 ping statistics --- 00:13:24.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.379 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:13:24.379 14:19:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:24.379 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:24.379 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:13:24.379 00:13:24.379 --- 10.0.0.3 ping statistics --- 00:13:24.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.379 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:13:24.379 14:19:30 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:24.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:24.379 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:13:24.379 00:13:24.379 --- 10.0.0.1 ping statistics --- 00:13:24.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.379 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:13:24.379 14:19:30 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:24.379 14:19:30 -- nvmf/common.sh@421 -- # return 0 00:13:24.379 14:19:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:24.379 14:19:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:24.379 14:19:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:24.379 14:19:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:24.379 14:19:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:24.379 14:19:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:24.379 14:19:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:24.638 14:19:30 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:24.638 14:19:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:24.638 14:19:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:24.638 14:19:30 -- common/autotest_common.sh@10 -- # set +x 00:13:24.638 14:19:30 -- nvmf/common.sh@469 -- # nvmfpid=79315 00:13:24.638 14:19:30 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:24.638 14:19:30 -- nvmf/common.sh@470 -- # waitforlisten 79315 00:13:24.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.638 14:19:30 -- common/autotest_common.sh@829 -- # '[' -z 79315 ']' 00:13:24.638 14:19:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.638 14:19:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:24.638 14:19:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.638 14:19:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:24.638 14:19:30 -- common/autotest_common.sh@10 -- # set +x 00:13:24.638 [2024-12-05 14:19:30.091578] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:24.638 [2024-12-05 14:19:30.091833] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:24.638 [2024-12-05 14:19:30.226060] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:24.896 [2024-12-05 14:19:30.300728] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:24.896 [2024-12-05 14:19:30.300878] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:24.896 [2024-12-05 14:19:30.300891] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:24.896 [2024-12-05 14:19:30.300899] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
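The nvmfappstart -m 0xE sequence above repeats the launch pattern used for the abort test earlier: prefix the app with the namespace command, start it in the background, then block until the RPC socket answers. A sketch reconstructed from the traced lines (helper names as they appear in the trace; the backgrounding and PID capture are implied by nvmfpid=79315 rather than shown verbatim):

# Launch nvmf_tgt inside the test namespace and wait for /var/tmp/spdk.sock.
NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF)

"${NVMF_APP[@]}" -m 0xE &        # reactors come up on cores 1-3, per the notices that follow
nvmfpid=$!
waitforlisten "$nvmfpid"         # returns once the app is listening on /var/tmp/spdk.sock
trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT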
00:13:24.897 [2024-12-05 14:19:30.301037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:24.897 [2024-12-05 14:19:30.301171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:24.897 [2024-12-05 14:19:30.301178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:25.463 14:19:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:25.463 14:19:31 -- common/autotest_common.sh@862 -- # return 0 00:13:25.463 14:19:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:25.463 14:19:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:25.463 14:19:31 -- common/autotest_common.sh@10 -- # set +x 00:13:25.463 14:19:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:25.463 14:19:31 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:25.463 14:19:31 -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:25.722 [2024-12-05 14:19:31.358028] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:25.981 14:19:31 -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:26.239 14:19:31 -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:26.498 [2024-12-05 14:19:31.914929] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.498 14:19:31 -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:26.757 14:19:32 -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:27.015 Malloc0 00:13:27.015 14:19:32 -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:27.015 Delay0 00:13:27.015 14:19:32 -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.273 14:19:32 -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:27.530 NULL1 00:13:27.530 14:19:33 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:27.788 14:19:33 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=79453 00:13:27.788 14:19:33 -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:27.788 14:19:33 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79453 00:13:27.788 14:19:33 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.046 Read completed with error (sct=0, sc=11) 00:13:28.046 14:19:33 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:28.046 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:13:28.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:28.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:28.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:28.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:28.303 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:28.303 14:19:33 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:28.303 14:19:33 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:28.560 true 00:13:28.560 14:19:33 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79453 00:13:28.560 14:19:33 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.127 14:19:34 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.692 14:19:35 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:29.692 14:19:35 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:29.692 true 00:13:29.692 14:19:35 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79453 00:13:29.692 14:19:35 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.950 14:19:35 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.207 14:19:35 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:30.207 14:19:35 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:30.464 true 00:13:30.464 14:19:35 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79453 00:13:30.464 14:19:35 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.396 14:19:36 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.396 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.396 14:19:36 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:31.396 14:19:36 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:31.653 true 00:13:31.653 14:19:37 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79453 00:13:31.653 14:19:37 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.911 14:19:37 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.168 14:19:37 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:32.168 14:19:37 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:32.425 true 00:13:32.425 14:19:37 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79453 00:13:32.425 14:19:37 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.359 14:19:38 -- target/ns_hotplug_stress.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.616 14:19:39 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:33.616 14:19:39 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:33.874 true 00:13:33.874 14:19:39 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79453 00:13:33.874 14:19:39 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.131 14:19:39 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.132 14:19:39 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:34.132 14:19:39 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:34.389 true 00:13:34.389 14:19:39 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79453 00:13:34.389 14:19:39 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.320 14:19:40 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.577 14:19:41 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:35.577 14:19:41 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:35.834 true 00:13:35.834 14:19:41 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79453 00:13:35.834 14:19:41 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.093 14:19:41 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.352 14:19:41 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:36.352 14:19:41 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:36.352 true 00:13:36.352 14:19:41 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79453 00:13:36.352 14:19:41 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.282 14:19:42 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.539 14:19:43 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:37.539 14:19:43 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:37.796 true 00:13:37.796 14:19:43 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79453 00:13:37.796 14:19:43 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.053 14:19:43 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.053 14:19:43 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:38.053 14:19:43 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:38.309 true 00:13:38.309 14:19:43 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79453 00:13:38.309 14:19:43 -- 
target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.263 14:19:44 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:39.521 14:19:45 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:39.521 14:19:45 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:39.779 true 00:13:39.779 14:19:45 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79453 00:13:39.779 14:19:45 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.036 14:19:45 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:40.294 14:19:45 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:40.294 14:19:45 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:40.294 true 00:13:40.294 14:19:45 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79453 00:13:40.294 14:19:45 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.261 14:19:46 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.539 14:19:47 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:41.539 14:19:47 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:41.798 true 00:13:41.798 14:19:47 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79453 00:13:41.798 14:19:47 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.058 14:19:47 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:42.058 14:19:47 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:42.058 14:19:47 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:42.316 true 00:13:42.316 14:19:47 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79453 00:13:42.316 14:19:47 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.252 14:19:48 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:43.511 14:19:49 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:43.511 14:19:49 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:43.769 true 00:13:43.769 14:19:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79453 00:13:43.769 14:19:49 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.028 14:19:49 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:44.287 14:19:49 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:44.287 14:19:49 -- target/ns_hotplug_stress.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:44.546 true 00:13:44.546 14:19:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79453 00:13:44.546 14:19:49 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.481 14:19:50 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:45.740 14:19:51 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:45.740 14:19:51 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:45.998 true 00:13:45.998 14:19:51 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79453 00:13:45.998 14:19:51 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.998 14:19:51 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.256 14:19:51 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:46.256 14:19:51 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:46.514 true 00:13:46.514 14:19:52 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79453 00:13:46.514 14:19:52 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.450 14:19:52 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.708 14:19:53 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:47.708 14:19:53 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:47.966 true 00:13:47.966 14:19:53 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79453 00:13:47.966 14:19:53 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.966 14:19:53 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.223 14:19:53 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:48.223 14:19:53 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:48.480 true 00:13:48.480 14:19:54 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79453 00:13:48.480 14:19:54 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.413 14:19:54 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.669 14:19:55 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:49.669 14:19:55 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:49.926 true 00:13:49.926 14:19:55 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79453 00:13:49.926 14:19:55 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.184 14:19:55 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.184 14:19:55 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:50.184 14:19:55 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:50.442 true 00:13:50.442 14:19:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79453 00:13:50.442 14:19:56 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.377 14:19:56 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.634 14:19:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:51.635 14:19:57 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:51.893 true 00:13:51.893 14:19:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79453 00:13:51.893 14:19:57 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.151 14:19:57 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.410 14:19:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:52.410 14:19:57 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:52.668 true 00:13:52.668 14:19:58 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79453 00:13:52.668 14:19:58 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.603 14:19:58 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.603 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:53.603 14:19:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:53.603 14:19:59 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:53.861 true 00:13:53.861 14:19:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79453 00:13:53.861 14:19:59 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.119 14:19:59 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:54.377 14:19:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:54.377 14:19:59 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:54.377 true 00:13:54.636 14:20:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79453 00:13:54.636 14:20:00 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.571 14:20:00 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.571 14:20:01 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:55.571 14:20:01 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:55.829 true 00:13:55.829 14:20:01 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79453 
00:13:55.829 14:20:01 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.086 14:20:01 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.344 14:20:01 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:13:56.344 14:20:01 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:56.602 true 00:13:56.602 14:20:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79453 00:13:56.602 14:20:02 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.861 14:20:02 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:57.119 14:20:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:13:57.119 14:20:02 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:13:57.377 true 00:13:57.377 14:20:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79453 00:13:57.377 14:20:02 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.313 Initializing NVMe Controllers 00:13:58.313 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:58.313 Controller IO queue size 128, less than required. 00:13:58.313 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:58.313 Controller IO queue size 128, less than required. 00:13:58.313 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:58.313 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:58.313 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:58.313 Initialization complete. Launching workers. 
00:13:58.313 ======================================================== 00:13:58.313 Latency(us) 00:13:58.313 Device Information : IOPS MiB/s Average min max 00:13:58.313 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 368.27 0.18 186365.66 2700.85 1121287.05 00:13:58.313 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 14035.04 6.85 9119.70 1342.50 551091.45 00:13:58.313 ======================================================== 00:13:58.313 Total : 14403.30 7.03 13651.55 1342.50 1121287.05 00:13:58.313 00:13:58.313 14:20:03 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:58.572 14:20:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:13:58.572 14:20:04 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:13:58.830 true 00:13:58.830 14:20:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79453 00:13:58.830 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (79453) - No such process 00:13:58.830 14:20:04 -- target/ns_hotplug_stress.sh@53 -- # wait 79453 00:13:58.830 14:20:04 -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.088 14:20:04 -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:59.346 14:20:04 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:59.346 14:20:04 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:59.346 14:20:04 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:59.346 14:20:04 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:59.346 14:20:04 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:59.603 null0 00:13:59.603 14:20:05 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:59.603 14:20:05 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:59.603 14:20:05 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:59.860 null1 00:13:59.860 14:20:05 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:59.860 14:20:05 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:59.860 14:20:05 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:59.860 null2 00:14:00.117 14:20:05 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:00.117 14:20:05 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:00.117 14:20:05 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:14:00.375 null3 00:14:00.375 14:20:05 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:00.375 14:20:05 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:00.375 14:20:05 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:14:00.375 null4 00:14:00.375 14:20:05 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:00.375 14:20:05 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:00.375 14:20:05 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:14:00.632 null5 00:14:00.632 14:20:06 -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:00.632 14:20:06 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:00.632 14:20:06 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:14:00.888 null6 00:14:00.888 14:20:06 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:00.888 14:20:06 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:00.888 14:20:06 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:14:01.146 null7 00:14:01.146 14:20:06 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:01.146 14:20:06 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:01.146 14:20:06 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:14:01.146 14:20:06 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:01.146 14:20:06 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:01.146 14:20:06 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:14:01.146 14:20:06 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:01.146 14:20:06 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:01.146 14:20:06 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:14:01.146 14:20:06 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:01.146 14:20:06 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.146 14:20:06 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:01.146 14:20:06 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:01.146 14:20:06 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:14:01.146 14:20:06 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:01.146 14:20:06 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:14:01.146 14:20:06 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:01.146 14:20:06 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.146 14:20:06 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:01.146 14:20:06 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:01.146 14:20:06 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:01.146 14:20:06 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:14:01.146 14:20:06 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:14:01.146 14:20:06 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:01.146 14:20:06 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.146 14:20:06 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:01.146 14:20:06 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:01.146 14:20:06 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:01.146 14:20:06 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
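The long null_size=1001 ... 1031 stretch earlier in the trace is the first phase of the stress test: while spdk_nvme_perf (PERF_PID=79453) keeps reads in flight, the namespace on cnode1 is hot-swapped (removed, then Delay0 re-added) and the NULL1 bdev resized one step larger on every pass. The bursts of "Read completed with error (sct=0, sc=11)" are the reads that land while the namespace is detached. A condensed sketch in the order the trace shows for ns_hotplug_stress.sh@44-@50 (the loop structure is inferred from the repetition):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
null_size=1000

while kill -0 "$PERF_PID"; do    # keep going until spdk_nvme_perf (79453 above) exits
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    $rpc_py bdev_null_resize NULL1 "$null_size"    # bump NULL1 to the next size
done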
00:14:01.146 14:20:06 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:14:01.146 14:20:06 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:01.146 14:20:06 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:14:01.146 14:20:06 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:01.146 14:20:06 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:01.147 14:20:06 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.147 14:20:06 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:01.147 14:20:06 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:01.147 14:20:06 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:01.147 14:20:06 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:01.147 14:20:06 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:14:01.147 14:20:06 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:14:01.147 14:20:06 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:01.147 14:20:06 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.147 14:20:06 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:14:01.147 14:20:06 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:01.147 14:20:06 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:01.147 14:20:06 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:01.147 14:20:06 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:14:01.147 14:20:06 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:01.147 14:20:06 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.147 14:20:06 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:01.147 14:20:06 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:01.147 14:20:06 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:01.147 14:20:06 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:14:01.147 14:20:06 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:14:01.147 14:20:06 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:01.147 14:20:06 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:01.147 14:20:06 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:01.147 14:20:06 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.147 14:20:06 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:01.147 14:20:06 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
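The interleaved @62-@64 lines above show each namespace ID being handed to its own background worker; the fan-out implied by the trace is roughly the following sketch, with the @66 'wait 80523 80524 ...' line below collecting the worker PIDs:

    for (( i = 0; i < nthreads; i++ )); do
        # each worker churns one namespace ID against the shared subsystem
        add_remove $(( i + 1 )) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"

Because all eight workers target nqn.2016-06.io.spdk:cnode1 concurrently, their xtrace output interleaves, which is why the add/remove lines that follow appear out of order.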
00:14:01.147 14:20:06 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:01.147 14:20:06 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:01.147 14:20:06 -- target/ns_hotplug_stress.sh@66 -- # wait 80523 80524 80527 80529 80530 80532 80535 80537 00:14:01.147 14:20:06 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:14:01.147 14:20:06 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:14:01.147 14:20:06 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:01.147 14:20:06 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.147 14:20:06 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:01.404 14:20:06 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.404 14:20:06 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:01.404 14:20:06 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:01.404 14:20:06 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:01.404 14:20:06 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:01.404 14:20:06 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:01.662 14:20:07 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:01.662 14:20:07 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:01.662 14:20:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.662 14:20:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.662 14:20:07 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:01.662 14:20:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.662 14:20:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.662 14:20:07 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:01.662 14:20:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.662 14:20:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.662 14:20:07 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:01.662 14:20:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.662 14:20:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.662 14:20:07 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:01.662 14:20:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.662 14:20:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.662 14:20:07 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:14:01.662 14:20:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.662 14:20:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.662 14:20:07 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:01.662 14:20:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.662 14:20:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.662 14:20:07 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:01.662 14:20:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.662 14:20:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.662 14:20:07 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:01.919 14:20:07 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:01.919 14:20:07 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.919 14:20:07 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:01.919 14:20:07 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:01.919 14:20:07 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:01.919 14:20:07 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:01.919 14:20:07 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:02.177 14:20:07 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:02.177 14:20:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.177 14:20:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.177 14:20:07 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:02.177 14:20:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.177 14:20:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.177 14:20:07 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:02.177 14:20:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.177 14:20:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.177 14:20:07 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:02.177 14:20:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.177 14:20:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.177 14:20:07 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:02.177 14:20:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
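Each worker then runs the same ten add/remove cycles against the subsystem. A simplified sketch of add_remove as implied by the @14-@18 trace lines (error handling omitted; the rpc.py path is the one logged above):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    add_remove() {
        # attach the given bdev as namespace $nsid, then detach it, ten times
        local nsid=$1 bdev=$2
        for (( i = 0; i < 10; i++ )); do
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }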
00:14:02.177 14:20:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.177 14:20:07 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:02.177 14:20:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.177 14:20:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.177 14:20:07 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:02.177 14:20:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.177 14:20:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.177 14:20:07 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:02.435 14:20:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.435 14:20:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.435 14:20:07 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:02.435 14:20:07 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:02.435 14:20:07 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:02.435 14:20:07 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.435 14:20:08 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:02.435 14:20:08 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:02.435 14:20:08 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:02.435 14:20:08 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:02.693 14:20:08 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.693 14:20:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.693 14:20:08 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:02.693 14:20:08 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:02.693 14:20:08 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.693 14:20:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.693 14:20:08 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:02.693 14:20:08 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.693 14:20:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.693 14:20:08 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:02.693 14:20:08 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.693 14:20:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.693 14:20:08 -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:02.693 14:20:08 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.693 14:20:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.693 14:20:08 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:02.693 14:20:08 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.693 14:20:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.693 14:20:08 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:02.693 14:20:08 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.693 14:20:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.693 14:20:08 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:02.951 14:20:08 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:02.952 14:20:08 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.952 14:20:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.952 14:20:08 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:02.952 14:20:08 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:02.952 14:20:08 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.952 14:20:08 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:02.952 14:20:08 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:02.952 14:20:08 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:03.209 14:20:08 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:03.209 14:20:08 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:03.209 14:20:08 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.209 14:20:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.209 14:20:08 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:03.209 14:20:08 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.209 14:20:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.209 14:20:08 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:03.209 14:20:08 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.209 14:20:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.209 14:20:08 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:14:03.209 14:20:08 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.209 14:20:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.209 14:20:08 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:03.209 14:20:08 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.209 14:20:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.209 14:20:08 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:03.209 14:20:08 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.209 14:20:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.209 14:20:08 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:03.466 14:20:08 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.466 14:20:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.466 14:20:08 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:03.466 14:20:08 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:03.466 14:20:08 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:03.466 14:20:08 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.466 14:20:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.466 14:20:08 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:03.466 14:20:08 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.466 14:20:08 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:03.466 14:20:09 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:03.466 14:20:09 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:03.466 14:20:09 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:03.723 14:20:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.723 14:20:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.723 14:20:09 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:03.723 14:20:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.723 14:20:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.723 14:20:09 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:03.723 14:20:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.723 14:20:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.723 14:20:09 -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:03.723 14:20:09 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:03.723 14:20:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.723 14:20:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.723 14:20:09 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:03.723 14:20:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.723 14:20:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.723 14:20:09 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:03.723 14:20:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.723 14:20:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.723 14:20:09 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:03.723 14:20:09 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:03.979 14:20:09 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:03.980 14:20:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.980 14:20:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.980 14:20:09 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:03.980 14:20:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.980 14:20:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.980 14:20:09 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:03.980 14:20:09 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:03.980 14:20:09 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.980 14:20:09 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:03.980 14:20:09 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:03.980 14:20:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.980 14:20:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.980 14:20:09 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:04.236 14:20:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.236 14:20:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.236 14:20:09 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:04.236 14:20:09 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 
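While the churn runs, the current namespace attachments can be inspected out of band. This is not part of the test; it is a hedged example that assumes jq is installed and that nvmf_get_subsystems reports attached namespaces under a "namespaces" array, which matches current SPDK but is not shown in this log:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems \
        | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | [.namespaces[].nsid]'

At any instant the result is some subset of NSIDs 1-8, depending on which workers sit between their add and remove calls.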
00:14:04.236 14:20:09 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:04.236 14:20:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.236 14:20:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.236 14:20:09 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:04.236 14:20:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.236 14:20:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.236 14:20:09 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:04.236 14:20:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.236 14:20:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.236 14:20:09 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:04.236 14:20:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.236 14:20:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.236 14:20:09 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:04.236 14:20:09 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:04.493 14:20:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.493 14:20:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.493 14:20:09 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:04.493 14:20:09 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:04.493 14:20:09 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:04.493 14:20:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.493 14:20:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.493 14:20:09 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:04.493 14:20:09 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.493 14:20:09 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:04.493 14:20:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.493 14:20:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.493 14:20:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:04.493 14:20:10 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:04.751 14:20:10 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:04.751 14:20:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.751 14:20:10 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.751 14:20:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:04.751 14:20:10 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:04.751 14:20:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.751 14:20:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.751 14:20:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:04.751 14:20:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.751 14:20:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.751 14:20:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:04.751 14:20:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.751 14:20:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.751 14:20:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:04.751 14:20:10 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:04.751 14:20:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.751 14:20:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.751 14:20:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:04.751 14:20:10 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:05.009 14:20:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.009 14:20:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.009 14:20:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:05.009 14:20:10 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:05.009 14:20:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.009 14:20:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.009 14:20:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:05.009 14:20:10 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:05.009 14:20:10 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:05.009 14:20:10 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.009 14:20:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.009 14:20:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.009 14:20:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:05.009 14:20:10 -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:14:05.009 14:20:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.009 14:20:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:05.267 14:20:10 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:05.267 14:20:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.267 14:20:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.267 14:20:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:05.267 14:20:10 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:05.267 14:20:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.267 14:20:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.267 14:20:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:05.267 14:20:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.267 14:20:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.267 14:20:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:05.267 14:20:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.267 14:20:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.267 14:20:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:05.267 14:20:10 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:05.267 14:20:10 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:05.267 14:20:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.267 14:20:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.267 14:20:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:05.525 14:20:10 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:05.525 14:20:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.525 14:20:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.525 14:20:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:05.525 14:20:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:05.525 14:20:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.525 14:20:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:05.525 14:20:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.525 14:20:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.525 14:20:11 
-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:05.525 14:20:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:05.525 14:20:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.525 14:20:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.525 14:20:11 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:05.784 14:20:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:05.784 14:20:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.784 14:20:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.784 14:20:11 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:05.784 14:20:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.784 14:20:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.784 14:20:11 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:05.784 14:20:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.784 14:20:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.784 14:20:11 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:05.784 14:20:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.784 14:20:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.784 14:20:11 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:05.784 14:20:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:05.784 14:20:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:05.784 14:20:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.784 14:20:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.784 14:20:11 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:05.784 14:20:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.784 14:20:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.784 14:20:11 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:06.043 14:20:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:06.043 14:20:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.043 14:20:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:06.043 14:20:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:06.043 14:20:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.043 14:20:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.043 14:20:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:06.301 14:20:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.301 14:20:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.301 14:20:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:06.301 14:20:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.301 14:20:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.301 14:20:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.301 14:20:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.301 14:20:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.301 14:20:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.301 14:20:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.301 14:20:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.301 14:20:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.301 14:20:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.301 14:20:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.301 14:20:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.301 14:20:11 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:06.301 14:20:11 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:06.301 14:20:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:06.301 14:20:11 -- nvmf/common.sh@116 -- # sync 00:14:06.301 14:20:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:06.301 14:20:11 -- nvmf/common.sh@119 -- # set +e 00:14:06.301 14:20:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:06.301 14:20:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:06.301 rmmod nvme_tcp 00:14:06.560 rmmod nvme_fabrics 00:14:06.560 rmmod nvme_keyring 00:14:06.560 14:20:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:06.560 14:20:11 -- nvmf/common.sh@123 -- # set -e 00:14:06.560 14:20:11 -- nvmf/common.sh@124 -- # return 0 00:14:06.560 14:20:11 -- nvmf/common.sh@477 -- # '[' -n 79315 ']' 00:14:06.560 14:20:11 -- nvmf/common.sh@478 -- # killprocess 79315 00:14:06.560 14:20:11 -- common/autotest_common.sh@936 -- # '[' -z 79315 ']' 00:14:06.560 14:20:11 -- common/autotest_common.sh@940 -- # kill -0 79315 00:14:06.560 14:20:11 -- common/autotest_common.sh@941 -- # uname 00:14:06.560 14:20:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:06.560 14:20:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79315 00:14:06.560 14:20:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:06.560 14:20:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:06.560 killing process with pid 79315 00:14:06.560 14:20:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79315' 00:14:06.560 14:20:12 -- common/autotest_common.sh@955 -- # kill 79315 00:14:06.560 14:20:12 -- common/autotest_common.sh@960 -- # wait 79315 00:14:06.818 14:20:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:06.818 14:20:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:06.818 14:20:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:06.818 14:20:12 -- 
nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:06.818 14:20:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:06.818 14:20:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.818 14:20:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:06.818 14:20:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.818 14:20:12 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:06.818 00:14:06.818 real 0m42.817s 00:14:06.818 user 3m23.012s 00:14:06.818 sys 0m12.232s 00:14:06.818 14:20:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:06.818 14:20:12 -- common/autotest_common.sh@10 -- # set +x 00:14:06.818 ************************************ 00:14:06.818 END TEST nvmf_ns_hotplug_stress 00:14:06.818 ************************************ 00:14:06.818 14:20:12 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:06.818 14:20:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:06.818 14:20:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:06.818 14:20:12 -- common/autotest_common.sh@10 -- # set +x 00:14:06.818 ************************************ 00:14:06.818 START TEST nvmf_connect_stress 00:14:06.818 ************************************ 00:14:06.818 14:20:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:07.078 * Looking for test storage... 00:14:07.078 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:07.078 14:20:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:07.078 14:20:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:07.078 14:20:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:07.078 14:20:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:07.078 14:20:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:07.078 14:20:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:07.078 14:20:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:07.078 14:20:12 -- scripts/common.sh@335 -- # IFS=.-: 00:14:07.078 14:20:12 -- scripts/common.sh@335 -- # read -ra ver1 00:14:07.078 14:20:12 -- scripts/common.sh@336 -- # IFS=.-: 00:14:07.078 14:20:12 -- scripts/common.sh@336 -- # read -ra ver2 00:14:07.078 14:20:12 -- scripts/common.sh@337 -- # local 'op=<' 00:14:07.078 14:20:12 -- scripts/common.sh@339 -- # ver1_l=2 00:14:07.078 14:20:12 -- scripts/common.sh@340 -- # ver2_l=1 00:14:07.078 14:20:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:07.078 14:20:12 -- scripts/common.sh@343 -- # case "$op" in 00:14:07.078 14:20:12 -- scripts/common.sh@344 -- # : 1 00:14:07.078 14:20:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:07.078 14:20:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:07.078 14:20:12 -- scripts/common.sh@364 -- # decimal 1 00:14:07.078 14:20:12 -- scripts/common.sh@352 -- # local d=1 00:14:07.078 14:20:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:07.078 14:20:12 -- scripts/common.sh@354 -- # echo 1 00:14:07.078 14:20:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:07.078 14:20:12 -- scripts/common.sh@365 -- # decimal 2 00:14:07.078 14:20:12 -- scripts/common.sh@352 -- # local d=2 00:14:07.078 14:20:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:07.078 14:20:12 -- scripts/common.sh@354 -- # echo 2 00:14:07.078 14:20:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:07.078 14:20:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:07.078 14:20:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:07.078 14:20:12 -- scripts/common.sh@367 -- # return 0 00:14:07.078 14:20:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:07.078 14:20:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:07.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.078 --rc genhtml_branch_coverage=1 00:14:07.078 --rc genhtml_function_coverage=1 00:14:07.078 --rc genhtml_legend=1 00:14:07.078 --rc geninfo_all_blocks=1 00:14:07.078 --rc geninfo_unexecuted_blocks=1 00:14:07.078 00:14:07.078 ' 00:14:07.078 14:20:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:07.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.078 --rc genhtml_branch_coverage=1 00:14:07.078 --rc genhtml_function_coverage=1 00:14:07.078 --rc genhtml_legend=1 00:14:07.078 --rc geninfo_all_blocks=1 00:14:07.078 --rc geninfo_unexecuted_blocks=1 00:14:07.078 00:14:07.078 ' 00:14:07.078 14:20:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:07.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.078 --rc genhtml_branch_coverage=1 00:14:07.078 --rc genhtml_function_coverage=1 00:14:07.078 --rc genhtml_legend=1 00:14:07.078 --rc geninfo_all_blocks=1 00:14:07.078 --rc geninfo_unexecuted_blocks=1 00:14:07.078 00:14:07.078 ' 00:14:07.078 14:20:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:07.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.078 --rc genhtml_branch_coverage=1 00:14:07.078 --rc genhtml_function_coverage=1 00:14:07.078 --rc genhtml_legend=1 00:14:07.078 --rc geninfo_all_blocks=1 00:14:07.078 --rc geninfo_unexecuted_blocks=1 00:14:07.078 00:14:07.078 ' 00:14:07.078 14:20:12 -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:07.078 14:20:12 -- nvmf/common.sh@7 -- # uname -s 00:14:07.078 14:20:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:07.078 14:20:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:07.078 14:20:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:07.078 14:20:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:07.078 14:20:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:07.078 14:20:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:07.078 14:20:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:07.078 14:20:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:07.078 14:20:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:07.078 14:20:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:07.078 14:20:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 
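With the values nvmf/common.sh exports here (port 4420, target address in the 10.0.0.x range, a freshly generated host NQN, and the NVME_HOSTID set just below), an initiator inside the test environment would reach the subsystem that this test creates later roughly as follows. This is only a sketch of how the NVME_HOST options are consumed; connect_stress itself uses the perf-style transport string visible further down rather than nvme-cli:

    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"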
00:14:07.078 14:20:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:14:07.078 14:20:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:07.078 14:20:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:07.078 14:20:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:07.078 14:20:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:07.078 14:20:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:07.078 14:20:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:07.078 14:20:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:07.078 14:20:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.078 14:20:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.078 14:20:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.078 14:20:12 -- paths/export.sh@5 -- # export PATH 00:14:07.078 14:20:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.078 14:20:12 -- nvmf/common.sh@46 -- # : 0 00:14:07.078 14:20:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:07.078 14:20:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:07.078 14:20:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:07.078 14:20:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:07.078 14:20:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:07.078 14:20:12 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:14:07.078 14:20:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:07.078 14:20:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:07.078 14:20:12 -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:07.078 14:20:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:07.078 14:20:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:07.078 14:20:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:07.078 14:20:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:07.078 14:20:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:07.078 14:20:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.078 14:20:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:07.078 14:20:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.078 14:20:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:07.078 14:20:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:07.078 14:20:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:07.078 14:20:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:07.078 14:20:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:07.078 14:20:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:07.078 14:20:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:07.078 14:20:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:07.078 14:20:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:07.078 14:20:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:07.078 14:20:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:07.078 14:20:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:07.078 14:20:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:07.078 14:20:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:07.078 14:20:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:07.078 14:20:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:07.078 14:20:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:07.078 14:20:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:07.078 14:20:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:07.078 14:20:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:07.078 Cannot find device "nvmf_tgt_br" 00:14:07.078 14:20:12 -- nvmf/common.sh@154 -- # true 00:14:07.078 14:20:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:07.079 Cannot find device "nvmf_tgt_br2" 00:14:07.079 14:20:12 -- nvmf/common.sh@155 -- # true 00:14:07.079 14:20:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:07.079 14:20:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:07.079 Cannot find device "nvmf_tgt_br" 00:14:07.079 14:20:12 -- nvmf/common.sh@157 -- # true 00:14:07.079 14:20:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:07.079 Cannot find device "nvmf_tgt_br2" 00:14:07.079 14:20:12 -- nvmf/common.sh@158 -- # true 00:14:07.079 14:20:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:07.079 14:20:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:07.079 14:20:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:07.079 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:07.079 14:20:12 -- nvmf/common.sh@161 -- # true 00:14:07.079 14:20:12 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:07.079 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:07.079 14:20:12 -- nvmf/common.sh@162 -- # true 00:14:07.079 14:20:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:07.079 14:20:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:07.079 14:20:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:07.337 14:20:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:07.337 14:20:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:07.337 14:20:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:07.337 14:20:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:07.337 14:20:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:07.337 14:20:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:07.337 14:20:12 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:07.337 14:20:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:07.337 14:20:12 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:07.337 14:20:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:07.337 14:20:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:07.337 14:20:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:07.337 14:20:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:07.337 14:20:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:07.337 14:20:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:07.337 14:20:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:07.337 14:20:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:07.337 14:20:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:07.337 14:20:12 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:07.337 14:20:12 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:07.337 14:20:12 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:07.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:07.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:14:07.337 00:14:07.337 --- 10.0.0.2 ping statistics --- 00:14:07.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.337 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:14:07.337 14:20:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:07.337 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:07.337 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:14:07.337 00:14:07.337 --- 10.0.0.3 ping statistics --- 00:14:07.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.337 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:14:07.337 14:20:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:07.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:07.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:14:07.337 00:14:07.337 --- 10.0.0.1 ping statistics --- 00:14:07.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.337 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:14:07.337 14:20:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:07.337 14:20:12 -- nvmf/common.sh@421 -- # return 0 00:14:07.337 14:20:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:07.337 14:20:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:07.337 14:20:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:07.337 14:20:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:07.337 14:20:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:07.337 14:20:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:07.337 14:20:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:07.337 14:20:12 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:07.337 14:20:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:07.337 14:20:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:07.337 14:20:12 -- common/autotest_common.sh@10 -- # set +x 00:14:07.337 14:20:12 -- nvmf/common.sh@469 -- # nvmfpid=81846 00:14:07.337 14:20:12 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:07.337 14:20:12 -- nvmf/common.sh@470 -- # waitforlisten 81846 00:14:07.337 14:20:12 -- common/autotest_common.sh@829 -- # '[' -z 81846 ']' 00:14:07.337 14:20:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.337 14:20:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:07.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.337 14:20:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.337 14:20:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:07.337 14:20:12 -- common/autotest_common.sh@10 -- # set +x 00:14:07.337 [2024-12-05 14:20:12.964440] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:07.337 [2024-12-05 14:20:12.964519] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:07.609 [2024-12-05 14:20:13.097670] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:07.609 [2024-12-05 14:20:13.174027] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:07.610 [2024-12-05 14:20:13.174172] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:07.610 [2024-12-05 14:20:13.174185] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:07.610 [2024-12-05 14:20:13.174192] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:07.610 [2024-12-05 14:20:13.174388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:07.610 [2024-12-05 14:20:13.174991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:07.610 [2024-12-05 14:20:13.174999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.588 14:20:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:08.588 14:20:13 -- common/autotest_common.sh@862 -- # return 0 00:14:08.588 14:20:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:08.588 14:20:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:08.588 14:20:13 -- common/autotest_common.sh@10 -- # set +x 00:14:08.588 14:20:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:08.588 14:20:14 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:08.588 14:20:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.588 14:20:14 -- common/autotest_common.sh@10 -- # set +x 00:14:08.588 [2024-12-05 14:20:14.044583] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:08.588 14:20:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.588 14:20:14 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:08.588 14:20:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.588 14:20:14 -- common/autotest_common.sh@10 -- # set +x 00:14:08.588 14:20:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.588 14:20:14 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:08.588 14:20:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.588 14:20:14 -- common/autotest_common.sh@10 -- # set +x 00:14:08.588 [2024-12-05 14:20:14.062450] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:08.588 14:20:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.588 14:20:14 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:08.588 14:20:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.588 14:20:14 -- common/autotest_common.sh@10 -- # set +x 00:14:08.588 NULL1 00:14:08.588 14:20:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.588 14:20:14 -- target/connect_stress.sh@21 -- # PERF_PID=81905 00:14:08.589 14:20:14 -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:08.589 14:20:14 -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:14:08.589 14:20:14 -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:14:08.589 14:20:14 -- target/connect_stress.sh@27 -- # seq 1 20 00:14:08.589 14:20:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.589 14:20:14 -- target/connect_stress.sh@28 -- # cat 00:14:08.589 14:20:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.589 14:20:14 -- target/connect_stress.sh@28 -- # cat 00:14:08.589 14:20:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.589 14:20:14 -- target/connect_stress.sh@28 -- # cat 00:14:08.589 14:20:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.589 14:20:14 -- 
target/connect_stress.sh@28 -- # cat 00:14:08.589 14:20:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.589 14:20:14 -- target/connect_stress.sh@28 -- # cat 00:14:08.589 14:20:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.589 14:20:14 -- target/connect_stress.sh@28 -- # cat 00:14:08.589 14:20:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.589 14:20:14 -- target/connect_stress.sh@28 -- # cat 00:14:08.589 14:20:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.589 14:20:14 -- target/connect_stress.sh@28 -- # cat 00:14:08.589 14:20:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.589 14:20:14 -- target/connect_stress.sh@28 -- # cat 00:14:08.589 14:20:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.589 14:20:14 -- target/connect_stress.sh@28 -- # cat 00:14:08.589 14:20:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.589 14:20:14 -- target/connect_stress.sh@28 -- # cat 00:14:08.589 14:20:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.589 14:20:14 -- target/connect_stress.sh@28 -- # cat 00:14:08.589 14:20:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.589 14:20:14 -- target/connect_stress.sh@28 -- # cat 00:14:08.589 14:20:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.589 14:20:14 -- target/connect_stress.sh@28 -- # cat 00:14:08.589 14:20:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.589 14:20:14 -- target/connect_stress.sh@28 -- # cat 00:14:08.589 14:20:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.589 14:20:14 -- target/connect_stress.sh@28 -- # cat 00:14:08.589 14:20:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.589 14:20:14 -- target/connect_stress.sh@28 -- # cat 00:14:08.589 14:20:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.589 14:20:14 -- target/connect_stress.sh@28 -- # cat 00:14:08.589 14:20:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.589 14:20:14 -- target/connect_stress.sh@28 -- # cat 00:14:08.589 14:20:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.589 14:20:14 -- target/connect_stress.sh@28 -- # cat 00:14:08.589 14:20:14 -- target/connect_stress.sh@34 -- # kill -0 81905 00:14:08.589 14:20:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.589 14:20:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.589 14:20:14 -- common/autotest_common.sh@10 -- # set +x 00:14:08.847 14:20:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.847 14:20:14 -- target/connect_stress.sh@34 -- # kill -0 81905 00:14:08.847 14:20:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.847 14:20:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.847 14:20:14 -- common/autotest_common.sh@10 -- # set +x 00:14:09.412 14:20:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.412 14:20:14 -- target/connect_stress.sh@34 -- # kill -0 81905 00:14:09.412 14:20:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.412 14:20:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.412 14:20:14 -- common/autotest_common.sh@10 -- # set +x 00:14:09.678 14:20:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.678 14:20:15 -- target/connect_stress.sh@34 -- # kill -0 81905 00:14:09.678 14:20:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.678 14:20:15 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:09.678 14:20:15 -- common/autotest_common.sh@10 -- # set +x 00:14:09.936 14:20:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.936 14:20:15 -- target/connect_stress.sh@34 -- # kill -0 81905 00:14:09.936 14:20:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.936 14:20:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.936 14:20:15 -- common/autotest_common.sh@10 -- # set +x 00:14:10.193 14:20:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.193 14:20:15 -- target/connect_stress.sh@34 -- # kill -0 81905 00:14:10.193 14:20:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.193 14:20:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.193 14:20:15 -- common/autotest_common.sh@10 -- # set +x 00:14:10.451 14:20:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.451 14:20:16 -- target/connect_stress.sh@34 -- # kill -0 81905 00:14:10.451 14:20:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.451 14:20:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.451 14:20:16 -- common/autotest_common.sh@10 -- # set +x 00:14:11.016 14:20:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.016 14:20:16 -- target/connect_stress.sh@34 -- # kill -0 81905 00:14:11.016 14:20:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.016 14:20:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.016 14:20:16 -- common/autotest_common.sh@10 -- # set +x 00:14:11.274 14:20:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.274 14:20:16 -- target/connect_stress.sh@34 -- # kill -0 81905 00:14:11.274 14:20:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.274 14:20:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.274 14:20:16 -- common/autotest_common.sh@10 -- # set +x 00:14:11.532 14:20:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.532 14:20:17 -- target/connect_stress.sh@34 -- # kill -0 81905 00:14:11.532 14:20:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.533 14:20:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.533 14:20:17 -- common/autotest_common.sh@10 -- # set +x 00:14:11.791 14:20:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.791 14:20:17 -- target/connect_stress.sh@34 -- # kill -0 81905 00:14:11.791 14:20:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.791 14:20:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.791 14:20:17 -- common/autotest_common.sh@10 -- # set +x 00:14:12.358 14:20:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.358 14:20:17 -- target/connect_stress.sh@34 -- # kill -0 81905 00:14:12.358 14:20:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.358 14:20:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.358 14:20:17 -- common/autotest_common.sh@10 -- # set +x 00:14:12.616 14:20:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.616 14:20:18 -- target/connect_stress.sh@34 -- # kill -0 81905 00:14:12.616 14:20:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.616 14:20:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.616 14:20:18 -- common/autotest_common.sh@10 -- # set +x 00:14:12.875 14:20:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.875 14:20:18 -- target/connect_stress.sh@34 -- # kill -0 81905 00:14:12.875 14:20:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.875 14:20:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.875 
14:20:18 -- common/autotest_common.sh@10 -- # set +x 00:14:13.134 14:20:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.134 14:20:18 -- target/connect_stress.sh@34 -- # kill -0 81905 00:14:13.134 14:20:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.134 14:20:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.134 14:20:18 -- common/autotest_common.sh@10 -- # set +x 00:14:13.392 14:20:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.392 14:20:18 -- target/connect_stress.sh@34 -- # kill -0 81905 00:14:13.392 14:20:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.392 14:20:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.392 14:20:18 -- common/autotest_common.sh@10 -- # set +x 00:14:13.959 14:20:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.959 14:20:19 -- target/connect_stress.sh@34 -- # kill -0 81905 00:14:13.959 14:20:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.959 14:20:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.959 14:20:19 -- common/autotest_common.sh@10 -- # set +x 00:14:14.218 14:20:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.218 14:20:19 -- target/connect_stress.sh@34 -- # kill -0 81905 00:14:14.218 14:20:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.218 14:20:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.218 14:20:19 -- common/autotest_common.sh@10 -- # set +x 00:14:14.477 14:20:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.477 14:20:19 -- target/connect_stress.sh@34 -- # kill -0 81905 00:14:14.477 14:20:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.477 14:20:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.477 14:20:19 -- common/autotest_common.sh@10 -- # set +x 00:14:14.736 14:20:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.736 14:20:20 -- target/connect_stress.sh@34 -- # kill -0 81905 00:14:14.736 14:20:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.736 14:20:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.736 14:20:20 -- common/autotest_common.sh@10 -- # set +x 00:14:14.994 14:20:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.994 14:20:20 -- target/connect_stress.sh@34 -- # kill -0 81905 00:14:14.994 14:20:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.994 14:20:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.994 14:20:20 -- common/autotest_common.sh@10 -- # set +x 00:14:15.561 14:20:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.561 14:20:20 -- target/connect_stress.sh@34 -- # kill -0 81905 00:14:15.561 14:20:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.561 14:20:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.561 14:20:20 -- common/autotest_common.sh@10 -- # set +x 00:14:15.819 14:20:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.819 14:20:21 -- target/connect_stress.sh@34 -- # kill -0 81905 00:14:15.819 14:20:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.819 14:20:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.819 14:20:21 -- common/autotest_common.sh@10 -- # set +x 00:14:16.078 14:20:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.078 14:20:21 -- target/connect_stress.sh@34 -- # kill -0 81905 00:14:16.078 14:20:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.078 14:20:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.078 14:20:21 -- 
common/autotest_common.sh@10 -- # set +x 00:14:16.337 14:20:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.337 14:20:21 -- target/connect_stress.sh@34 -- # kill -0 81905 00:14:16.337 14:20:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.337 14:20:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.337 14:20:21 -- common/autotest_common.sh@10 -- # set +x 00:14:16.596 14:20:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.596 14:20:22 -- target/connect_stress.sh@34 -- # kill -0 81905 00:14:16.596 14:20:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.596 14:20:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.596 14:20:22 -- common/autotest_common.sh@10 -- # set +x 00:14:17.163 14:20:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.163 14:20:22 -- target/connect_stress.sh@34 -- # kill -0 81905 00:14:17.163 14:20:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.163 14:20:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.163 14:20:22 -- common/autotest_common.sh@10 -- # set +x 00:14:17.421 14:20:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.421 14:20:22 -- target/connect_stress.sh@34 -- # kill -0 81905 00:14:17.421 14:20:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.421 14:20:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.421 14:20:22 -- common/autotest_common.sh@10 -- # set +x 00:14:17.679 14:20:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.679 14:20:23 -- target/connect_stress.sh@34 -- # kill -0 81905 00:14:17.679 14:20:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.679 14:20:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.679 14:20:23 -- common/autotest_common.sh@10 -- # set +x 00:14:17.938 14:20:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.938 14:20:23 -- target/connect_stress.sh@34 -- # kill -0 81905 00:14:17.938 14:20:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.938 14:20:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.938 14:20:23 -- common/autotest_common.sh@10 -- # set +x 00:14:18.196 14:20:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.196 14:20:23 -- target/connect_stress.sh@34 -- # kill -0 81905 00:14:18.196 14:20:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.196 14:20:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.196 14:20:23 -- common/autotest_common.sh@10 -- # set +x 00:14:18.763 14:20:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.763 14:20:24 -- target/connect_stress.sh@34 -- # kill -0 81905 00:14:18.763 14:20:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.763 14:20:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.763 14:20:24 -- common/autotest_common.sh@10 -- # set +x 00:14:18.763 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:19.022 14:20:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.022 14:20:24 -- target/connect_stress.sh@34 -- # kill -0 81905 00:14:19.022 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (81905) - No such process 00:14:19.022 14:20:24 -- target/connect_stress.sh@38 -- # wait 81905 00:14:19.022 14:20:24 -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:14:19.022 14:20:24 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:19.022 14:20:24 -- target/connect_stress.sh@43 -- # 
nvmftestfini 00:14:19.022 14:20:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:19.022 14:20:24 -- nvmf/common.sh@116 -- # sync 00:14:19.022 14:20:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:19.022 14:20:24 -- nvmf/common.sh@119 -- # set +e 00:14:19.022 14:20:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:19.022 14:20:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:19.022 rmmod nvme_tcp 00:14:19.022 rmmod nvme_fabrics 00:14:19.022 rmmod nvme_keyring 00:14:19.022 14:20:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:19.022 14:20:24 -- nvmf/common.sh@123 -- # set -e 00:14:19.022 14:20:24 -- nvmf/common.sh@124 -- # return 0 00:14:19.022 14:20:24 -- nvmf/common.sh@477 -- # '[' -n 81846 ']' 00:14:19.022 14:20:24 -- nvmf/common.sh@478 -- # killprocess 81846 00:14:19.022 14:20:24 -- common/autotest_common.sh@936 -- # '[' -z 81846 ']' 00:14:19.022 14:20:24 -- common/autotest_common.sh@940 -- # kill -0 81846 00:14:19.022 14:20:24 -- common/autotest_common.sh@941 -- # uname 00:14:19.022 14:20:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:19.022 14:20:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81846 00:14:19.022 killing process with pid 81846 00:14:19.022 14:20:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:19.022 14:20:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:19.022 14:20:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81846' 00:14:19.022 14:20:24 -- common/autotest_common.sh@955 -- # kill 81846 00:14:19.022 14:20:24 -- common/autotest_common.sh@960 -- # wait 81846 00:14:19.282 14:20:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:19.282 14:20:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:19.282 14:20:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:19.282 14:20:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:19.282 14:20:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:19.282 14:20:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.282 14:20:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:19.282 14:20:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.282 14:20:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:19.282 ************************************ 00:14:19.282 END TEST nvmf_connect_stress 00:14:19.282 ************************************ 00:14:19.282 00:14:19.282 real 0m12.528s 00:14:19.282 user 0m41.933s 00:14:19.282 sys 0m3.055s 00:14:19.282 14:20:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:19.282 14:20:24 -- common/autotest_common.sh@10 -- # set +x 00:14:19.541 14:20:24 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:19.541 14:20:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:19.541 14:20:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:19.541 14:20:24 -- common/autotest_common.sh@10 -- # set +x 00:14:19.541 ************************************ 00:14:19.541 START TEST nvmf_fused_ordering 00:14:19.541 ************************************ 00:14:19.541 14:20:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:19.541 * Looking for test storage... 
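The fused_ordering test starting here provisions the target over the RPC socket with the same handful of calls used by the connect_stress run above (transport, subsystem, listener, null bdev) before driving I/O. A minimal sketch of that sequence using the repo's scripts/rpc.py wrapper follows; the method names and arguments are the ones visible in the traces, while the rpc.py path and the default /var/tmp/spdk.sock socket are assumptions, since the tests issue these through rpc_cmd:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed invocation path

$RPC nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, options as in the trace
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
     -a -s SPDK00000000000001 -m 10                           # allow any host, set serial, up to 10 namespaces
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
     -t tcp -a 10.0.0.2 -s 4420                               # listen on the target-side veth address
$RPC bdev_null_create NULL1 1000 512                          # 1000 MB null bdev with 512-byte blocks
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1   # expose it as a namespace (~1 GB, as reported when fused_ordering attaches below)

The test binaries then connect to that subsystem with the transport ID string seen in the traces, 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'.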
00:14:19.541 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:19.541 14:20:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:19.541 14:20:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:19.541 14:20:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:19.541 14:20:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:19.541 14:20:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:19.541 14:20:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:19.541 14:20:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:19.541 14:20:25 -- scripts/common.sh@335 -- # IFS=.-: 00:14:19.541 14:20:25 -- scripts/common.sh@335 -- # read -ra ver1 00:14:19.541 14:20:25 -- scripts/common.sh@336 -- # IFS=.-: 00:14:19.541 14:20:25 -- scripts/common.sh@336 -- # read -ra ver2 00:14:19.541 14:20:25 -- scripts/common.sh@337 -- # local 'op=<' 00:14:19.541 14:20:25 -- scripts/common.sh@339 -- # ver1_l=2 00:14:19.541 14:20:25 -- scripts/common.sh@340 -- # ver2_l=1 00:14:19.541 14:20:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:19.541 14:20:25 -- scripts/common.sh@343 -- # case "$op" in 00:14:19.541 14:20:25 -- scripts/common.sh@344 -- # : 1 00:14:19.541 14:20:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:19.541 14:20:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:19.541 14:20:25 -- scripts/common.sh@364 -- # decimal 1 00:14:19.541 14:20:25 -- scripts/common.sh@352 -- # local d=1 00:14:19.541 14:20:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:19.541 14:20:25 -- scripts/common.sh@354 -- # echo 1 00:14:19.541 14:20:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:19.541 14:20:25 -- scripts/common.sh@365 -- # decimal 2 00:14:19.541 14:20:25 -- scripts/common.sh@352 -- # local d=2 00:14:19.541 14:20:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:19.541 14:20:25 -- scripts/common.sh@354 -- # echo 2 00:14:19.541 14:20:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:19.541 14:20:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:19.541 14:20:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:19.541 14:20:25 -- scripts/common.sh@367 -- # return 0 00:14:19.541 14:20:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:19.541 14:20:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:19.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.541 --rc genhtml_branch_coverage=1 00:14:19.541 --rc genhtml_function_coverage=1 00:14:19.541 --rc genhtml_legend=1 00:14:19.541 --rc geninfo_all_blocks=1 00:14:19.541 --rc geninfo_unexecuted_blocks=1 00:14:19.541 00:14:19.541 ' 00:14:19.541 14:20:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:19.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.541 --rc genhtml_branch_coverage=1 00:14:19.541 --rc genhtml_function_coverage=1 00:14:19.541 --rc genhtml_legend=1 00:14:19.541 --rc geninfo_all_blocks=1 00:14:19.541 --rc geninfo_unexecuted_blocks=1 00:14:19.541 00:14:19.541 ' 00:14:19.541 14:20:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:19.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.541 --rc genhtml_branch_coverage=1 00:14:19.541 --rc genhtml_function_coverage=1 00:14:19.541 --rc genhtml_legend=1 00:14:19.541 --rc geninfo_all_blocks=1 00:14:19.541 --rc geninfo_unexecuted_blocks=1 00:14:19.541 00:14:19.541 ' 00:14:19.541 
14:20:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:19.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.541 --rc genhtml_branch_coverage=1 00:14:19.541 --rc genhtml_function_coverage=1 00:14:19.541 --rc genhtml_legend=1 00:14:19.541 --rc geninfo_all_blocks=1 00:14:19.541 --rc geninfo_unexecuted_blocks=1 00:14:19.541 00:14:19.541 ' 00:14:19.541 14:20:25 -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:19.541 14:20:25 -- nvmf/common.sh@7 -- # uname -s 00:14:19.541 14:20:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:19.541 14:20:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:19.541 14:20:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:19.541 14:20:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:19.541 14:20:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:19.541 14:20:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:19.541 14:20:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:19.541 14:20:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:19.541 14:20:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:19.541 14:20:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:19.541 14:20:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:14:19.541 14:20:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:14:19.541 14:20:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:19.541 14:20:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:19.541 14:20:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:19.541 14:20:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:19.541 14:20:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:19.541 14:20:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:19.541 14:20:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:19.541 14:20:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.541 14:20:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.541 14:20:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.541 14:20:25 -- paths/export.sh@5 -- # export PATH 00:14:19.541 14:20:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.541 14:20:25 -- nvmf/common.sh@46 -- # : 0 00:14:19.542 14:20:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:19.542 14:20:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:19.542 14:20:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:19.542 14:20:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:19.542 14:20:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:19.542 14:20:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:19.542 14:20:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:19.542 14:20:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:19.542 14:20:25 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:19.542 14:20:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:19.542 14:20:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:19.800 14:20:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:19.800 14:20:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:19.800 14:20:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:19.800 14:20:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.800 14:20:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:19.800 14:20:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.800 14:20:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:19.800 14:20:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:19.800 14:20:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:19.800 14:20:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:19.800 14:20:25 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:19.800 14:20:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:19.800 14:20:25 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:19.800 14:20:25 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:19.800 14:20:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:19.800 14:20:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:19.800 14:20:25 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:19.800 14:20:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:19.800 14:20:25 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:19.800 14:20:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:14:19.800 14:20:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:19.800 14:20:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:19.800 14:20:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:19.800 14:20:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:19.800 14:20:25 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:19.800 14:20:25 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:19.800 Cannot find device "nvmf_tgt_br" 00:14:19.800 14:20:25 -- nvmf/common.sh@154 -- # true 00:14:19.800 14:20:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:19.800 Cannot find device "nvmf_tgt_br2" 00:14:19.800 14:20:25 -- nvmf/common.sh@155 -- # true 00:14:19.800 14:20:25 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:19.800 14:20:25 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:19.800 Cannot find device "nvmf_tgt_br" 00:14:19.800 14:20:25 -- nvmf/common.sh@157 -- # true 00:14:19.800 14:20:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:19.800 Cannot find device "nvmf_tgt_br2" 00:14:19.800 14:20:25 -- nvmf/common.sh@158 -- # true 00:14:19.800 14:20:25 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:19.800 14:20:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:19.800 14:20:25 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:19.800 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:19.800 14:20:25 -- nvmf/common.sh@161 -- # true 00:14:19.800 14:20:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:19.800 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:19.800 14:20:25 -- nvmf/common.sh@162 -- # true 00:14:19.800 14:20:25 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:19.800 14:20:25 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:19.800 14:20:25 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:19.800 14:20:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:19.800 14:20:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:19.800 14:20:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:19.800 14:20:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:19.800 14:20:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:19.800 14:20:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:19.800 14:20:25 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:19.800 14:20:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:19.800 14:20:25 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:19.800 14:20:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:19.800 14:20:25 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:19.800 14:20:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:19.800 14:20:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:19.800 14:20:25 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:19.800 14:20:25 -- 
nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:20.059 14:20:25 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:20.059 14:20:25 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:20.059 14:20:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:20.059 14:20:25 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:20.059 14:20:25 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:20.059 14:20:25 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:20.059 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:20.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:14:20.059 00:14:20.059 --- 10.0.0.2 ping statistics --- 00:14:20.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.059 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:14:20.059 14:20:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:20.059 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:20.059 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:14:20.059 00:14:20.059 --- 10.0.0.3 ping statistics --- 00:14:20.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.059 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:14:20.059 14:20:25 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:20.059 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:20.059 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:14:20.059 00:14:20.059 --- 10.0.0.1 ping statistics --- 00:14:20.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.059 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:14:20.059 14:20:25 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:20.059 14:20:25 -- nvmf/common.sh@421 -- # return 0 00:14:20.059 14:20:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:20.059 14:20:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:20.059 14:20:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:20.059 14:20:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:20.059 14:20:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:20.059 14:20:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:20.059 14:20:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:20.059 14:20:25 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:20.059 14:20:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:20.059 14:20:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:20.059 14:20:25 -- common/autotest_common.sh@10 -- # set +x 00:14:20.059 14:20:25 -- nvmf/common.sh@469 -- # nvmfpid=82237 00:14:20.059 14:20:25 -- nvmf/common.sh@470 -- # waitforlisten 82237 00:14:20.059 14:20:25 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:20.059 14:20:25 -- common/autotest_common.sh@829 -- # '[' -z 82237 ']' 00:14:20.059 14:20:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.059 14:20:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:20.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.059 14:20:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:20.059 14:20:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:20.059 14:20:25 -- common/autotest_common.sh@10 -- # set +x 00:14:20.059 [2024-12-05 14:20:25.591665] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:20.059 [2024-12-05 14:20:25.591938] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:20.317 [2024-12-05 14:20:25.733563] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.317 [2024-12-05 14:20:25.803354] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:20.317 [2024-12-05 14:20:25.803828] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:20.317 [2024-12-05 14:20:25.803861] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:20.317 [2024-12-05 14:20:25.803874] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:20.317 [2024-12-05 14:20:25.803919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.249 14:20:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:21.249 14:20:26 -- common/autotest_common.sh@862 -- # return 0 00:14:21.249 14:20:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:21.249 14:20:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:21.249 14:20:26 -- common/autotest_common.sh@10 -- # set +x 00:14:21.249 14:20:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:21.249 14:20:26 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:21.249 14:20:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.249 14:20:26 -- common/autotest_common.sh@10 -- # set +x 00:14:21.249 [2024-12-05 14:20:26.673975] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:21.249 14:20:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.249 14:20:26 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:21.249 14:20:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.249 14:20:26 -- common/autotest_common.sh@10 -- # set +x 00:14:21.249 14:20:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.249 14:20:26 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:21.249 14:20:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.249 14:20:26 -- common/autotest_common.sh@10 -- # set +x 00:14:21.249 [2024-12-05 14:20:26.690194] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.249 14:20:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.249 14:20:26 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:21.249 14:20:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.249 14:20:26 -- common/autotest_common.sh@10 -- # set +x 00:14:21.249 NULL1 00:14:21.249 14:20:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.249 14:20:26 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:21.249 14:20:26 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:21.249 14:20:26 -- common/autotest_common.sh@10 -- # set +x 00:14:21.249 14:20:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.249 14:20:26 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:21.249 14:20:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.249 14:20:26 -- common/autotest_common.sh@10 -- # set +x 00:14:21.249 14:20:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.249 14:20:26 -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:21.249 [2024-12-05 14:20:26.741932] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:21.250 [2024-12-05 14:20:26.742146] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82287 ] 00:14:21.508 Attached to nqn.2016-06.io.spdk:cnode1 00:14:21.508 Namespace ID: 1 size: 1GB 00:14:21.508 fused_ordering(0) 00:14:21.508 fused_ordering(1) 00:14:21.508 fused_ordering(2) 00:14:21.508 fused_ordering(3) 00:14:21.508 fused_ordering(4) 00:14:21.508 fused_ordering(5) 00:14:21.508 fused_ordering(6) 00:14:21.508 fused_ordering(7) 00:14:21.508 fused_ordering(8) 00:14:21.508 fused_ordering(9) 00:14:21.508 fused_ordering(10) 00:14:21.508 fused_ordering(11) 00:14:21.508 fused_ordering(12) 00:14:21.508 fused_ordering(13) 00:14:21.508 fused_ordering(14) 00:14:21.508 fused_ordering(15) 00:14:21.508 fused_ordering(16) 00:14:21.508 fused_ordering(17) 00:14:21.508 fused_ordering(18) 00:14:21.508 fused_ordering(19) 00:14:21.508 fused_ordering(20) 00:14:21.508 fused_ordering(21) 00:14:21.508 fused_ordering(22) 00:14:21.508 fused_ordering(23) 00:14:21.508 fused_ordering(24) 00:14:21.508 fused_ordering(25) 00:14:21.508 fused_ordering(26) 00:14:21.508 fused_ordering(27) 00:14:21.508 fused_ordering(28) 00:14:21.508 fused_ordering(29) 00:14:21.508 fused_ordering(30) 00:14:21.508 fused_ordering(31) 00:14:21.508 fused_ordering(32) 00:14:21.508 fused_ordering(33) 00:14:21.508 fused_ordering(34) 00:14:21.508 fused_ordering(35) 00:14:21.508 fused_ordering(36) 00:14:21.508 fused_ordering(37) 00:14:21.508 fused_ordering(38) 00:14:21.508 fused_ordering(39) 00:14:21.508 fused_ordering(40) 00:14:21.508 fused_ordering(41) 00:14:21.508 fused_ordering(42) 00:14:21.508 fused_ordering(43) 00:14:21.508 fused_ordering(44) 00:14:21.508 fused_ordering(45) 00:14:21.508 fused_ordering(46) 00:14:21.508 fused_ordering(47) 00:14:21.508 fused_ordering(48) 00:14:21.508 fused_ordering(49) 00:14:21.508 fused_ordering(50) 00:14:21.508 fused_ordering(51) 00:14:21.508 fused_ordering(52) 00:14:21.508 fused_ordering(53) 00:14:21.508 fused_ordering(54) 00:14:21.508 fused_ordering(55) 00:14:21.508 fused_ordering(56) 00:14:21.508 fused_ordering(57) 00:14:21.508 fused_ordering(58) 00:14:21.508 fused_ordering(59) 00:14:21.508 fused_ordering(60) 00:14:21.508 fused_ordering(61) 00:14:21.508 fused_ordering(62) 00:14:21.508 fused_ordering(63) 00:14:21.508 fused_ordering(64) 00:14:21.508 fused_ordering(65) 00:14:21.508 fused_ordering(66) 00:14:21.508 fused_ordering(67) 00:14:21.508 fused_ordering(68) 00:14:21.508 fused_ordering(69) 00:14:21.508 fused_ordering(70) 00:14:21.508 fused_ordering(71) 00:14:21.508 fused_ordering(72) 00:14:21.508 
fused_ordering(73) 00:14:21.508 fused_ordering(74) 00:14:21.508 fused_ordering(75) 00:14:21.508 fused_ordering(76) 00:14:21.508 fused_ordering(77) 00:14:21.508 fused_ordering(78) 00:14:21.508 fused_ordering(79) 00:14:21.508 fused_ordering(80) 00:14:21.508 fused_ordering(81) 00:14:21.508 fused_ordering(82) 00:14:21.508 fused_ordering(83) 00:14:21.508 fused_ordering(84) 00:14:21.508 fused_ordering(85) 00:14:21.508 fused_ordering(86) 00:14:21.508 fused_ordering(87) 00:14:21.508 fused_ordering(88) 00:14:21.508 fused_ordering(89) 00:14:21.508 fused_ordering(90) 00:14:21.508 fused_ordering(91) 00:14:21.508 fused_ordering(92) 00:14:21.508 fused_ordering(93) 00:14:21.508 fused_ordering(94) 00:14:21.508 fused_ordering(95) 00:14:21.508 fused_ordering(96) 00:14:21.508 fused_ordering(97) 00:14:21.508 fused_ordering(98) 00:14:21.508 fused_ordering(99) 00:14:21.508 fused_ordering(100) 00:14:21.508 fused_ordering(101) 00:14:21.508 fused_ordering(102) 00:14:21.508 fused_ordering(103) 00:14:21.508 fused_ordering(104) 00:14:21.508 fused_ordering(105) 00:14:21.508 fused_ordering(106) 00:14:21.508 fused_ordering(107) 00:14:21.508 fused_ordering(108) 00:14:21.508 fused_ordering(109) 00:14:21.508 fused_ordering(110) 00:14:21.508 fused_ordering(111) 00:14:21.508 fused_ordering(112) 00:14:21.508 fused_ordering(113) 00:14:21.508 fused_ordering(114) 00:14:21.508 fused_ordering(115) 00:14:21.508 fused_ordering(116) 00:14:21.508 fused_ordering(117) 00:14:21.508 fused_ordering(118) 00:14:21.508 fused_ordering(119) 00:14:21.508 fused_ordering(120) 00:14:21.508 fused_ordering(121) 00:14:21.508 fused_ordering(122) 00:14:21.508 fused_ordering(123) 00:14:21.508 fused_ordering(124) 00:14:21.508 fused_ordering(125) 00:14:21.508 fused_ordering(126) 00:14:21.508 fused_ordering(127) 00:14:21.508 fused_ordering(128) 00:14:21.508 fused_ordering(129) 00:14:21.508 fused_ordering(130) 00:14:21.508 fused_ordering(131) 00:14:21.508 fused_ordering(132) 00:14:21.508 fused_ordering(133) 00:14:21.508 fused_ordering(134) 00:14:21.508 fused_ordering(135) 00:14:21.508 fused_ordering(136) 00:14:21.508 fused_ordering(137) 00:14:21.508 fused_ordering(138) 00:14:21.508 fused_ordering(139) 00:14:21.508 fused_ordering(140) 00:14:21.508 fused_ordering(141) 00:14:21.508 fused_ordering(142) 00:14:21.508 fused_ordering(143) 00:14:21.508 fused_ordering(144) 00:14:21.508 fused_ordering(145) 00:14:21.508 fused_ordering(146) 00:14:21.508 fused_ordering(147) 00:14:21.508 fused_ordering(148) 00:14:21.508 fused_ordering(149) 00:14:21.508 fused_ordering(150) 00:14:21.508 fused_ordering(151) 00:14:21.508 fused_ordering(152) 00:14:21.508 fused_ordering(153) 00:14:21.508 fused_ordering(154) 00:14:21.508 fused_ordering(155) 00:14:21.508 fused_ordering(156) 00:14:21.508 fused_ordering(157) 00:14:21.508 fused_ordering(158) 00:14:21.508 fused_ordering(159) 00:14:21.508 fused_ordering(160) 00:14:21.508 fused_ordering(161) 00:14:21.508 fused_ordering(162) 00:14:21.508 fused_ordering(163) 00:14:21.508 fused_ordering(164) 00:14:21.508 fused_ordering(165) 00:14:21.508 fused_ordering(166) 00:14:21.508 fused_ordering(167) 00:14:21.508 fused_ordering(168) 00:14:21.508 fused_ordering(169) 00:14:21.508 fused_ordering(170) 00:14:21.508 fused_ordering(171) 00:14:21.508 fused_ordering(172) 00:14:21.508 fused_ordering(173) 00:14:21.508 fused_ordering(174) 00:14:21.508 fused_ordering(175) 00:14:21.508 fused_ordering(176) 00:14:21.508 fused_ordering(177) 00:14:21.508 fused_ordering(178) 00:14:21.508 fused_ordering(179) 00:14:21.508 fused_ordering(180) 00:14:21.508 
fused_ordering(181) ... fused_ordering(1023): the per-iteration output for iterations 181 through 1023 prints one entry at a time between 00:14:21.508 and 00:14:23.162 (repeated lines omitted).
00:14:23.162 14:20:28 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:14:23.162 14:20:28 -- target/fused_ordering.sh@25 -- # nvmftestfini
00:14:23.162 14:20:28 -- nvmf/common.sh@476 -- # nvmfcleanup
00:14:23.162 14:20:28 -- nvmf/common.sh@116 -- # sync
00:14:23.162 14:20:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:14:23.162 14:20:28 -- nvmf/common.sh@119 -- # set +e
00:14:23.162 14:20:28 -- nvmf/common.sh@120 -- # for i in {1..20}
00:14:23.162 14:20:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:14:23.162 rmmod 
nvme_tcp 00:14:23.162 rmmod nvme_fabrics 00:14:23.162 rmmod nvme_keyring 00:14:23.162 14:20:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:23.162 14:20:28 -- nvmf/common.sh@123 -- # set -e 00:14:23.162 14:20:28 -- nvmf/common.sh@124 -- # return 0 00:14:23.162 14:20:28 -- nvmf/common.sh@477 -- # '[' -n 82237 ']' 00:14:23.162 14:20:28 -- nvmf/common.sh@478 -- # killprocess 82237 00:14:23.162 14:20:28 -- common/autotest_common.sh@936 -- # '[' -z 82237 ']' 00:14:23.162 14:20:28 -- common/autotest_common.sh@940 -- # kill -0 82237 00:14:23.162 14:20:28 -- common/autotest_common.sh@941 -- # uname 00:14:23.162 14:20:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:23.162 14:20:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82237 00:14:23.162 killing process with pid 82237 00:14:23.162 14:20:28 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:23.162 14:20:28 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:23.162 14:20:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82237' 00:14:23.162 14:20:28 -- common/autotest_common.sh@955 -- # kill 82237 00:14:23.162 14:20:28 -- common/autotest_common.sh@960 -- # wait 82237 00:14:23.420 14:20:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:23.420 14:20:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:23.420 14:20:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:23.420 14:20:28 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:23.420 14:20:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:23.420 14:20:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.420 14:20:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:23.420 14:20:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.420 14:20:28 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:23.420 ************************************ 00:14:23.420 END TEST nvmf_fused_ordering 00:14:23.420 ************************************ 00:14:23.420 00:14:23.420 real 0m3.946s 00:14:23.420 user 0m4.469s 00:14:23.420 sys 0m1.446s 00:14:23.420 14:20:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:23.420 14:20:28 -- common/autotest_common.sh@10 -- # set +x 00:14:23.420 14:20:28 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:23.420 14:20:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:23.420 14:20:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:23.420 14:20:28 -- common/autotest_common.sh@10 -- # set +x 00:14:23.420 ************************************ 00:14:23.420 START TEST nvmf_delete_subsystem 00:14:23.420 ************************************ 00:14:23.420 14:20:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:23.420 * Looking for test storage... 
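The nvmf_fused_ordering teardown above follows the nvmftestfini path: unload the nvme-tcp/nvme-fabrics kernel modules, then kill the nvmf_tgt process (pid 82237, whose command name reports as reactor_1) only after confirming the pid is still alive and belongs to the expected process. Below is a minimal sketch of that pattern, assuming the same tools the log shows (kill -0, ps --no-headers -o comm=); it is not the exact autotest_common.sh implementation.

  # Sketch of a killprocess-style helper: verify the pid before killing it.
  killprocess() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0          # already gone, nothing to do
      local name
      name=$(ps --no-headers -o comm= "$pid")         # e.g. "reactor_1" for an SPDK target
      echo "killing process with pid $pid ($name)"
      kill "$pid"
      wait "$pid" 2>/dev/null || true                 # reap it when it is a child of this shell
  }

  # Module unload is retried (the log shows 'for i in {1..20}') because the modules
  # stay busy until the last NVMe/TCP connection from the test has been torn down.
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
      sleep 1
  done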
00:14:23.420 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:23.420 14:20:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:23.420 14:20:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:23.420 14:20:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:23.680 14:20:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:23.680 14:20:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:23.680 14:20:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:23.680 14:20:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:23.680 14:20:29 -- scripts/common.sh@335 -- # IFS=.-: 00:14:23.680 14:20:29 -- scripts/common.sh@335 -- # read -ra ver1 00:14:23.680 14:20:29 -- scripts/common.sh@336 -- # IFS=.-: 00:14:23.680 14:20:29 -- scripts/common.sh@336 -- # read -ra ver2 00:14:23.680 14:20:29 -- scripts/common.sh@337 -- # local 'op=<' 00:14:23.680 14:20:29 -- scripts/common.sh@339 -- # ver1_l=2 00:14:23.680 14:20:29 -- scripts/common.sh@340 -- # ver2_l=1 00:14:23.680 14:20:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:23.680 14:20:29 -- scripts/common.sh@343 -- # case "$op" in 00:14:23.680 14:20:29 -- scripts/common.sh@344 -- # : 1 00:14:23.680 14:20:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:23.680 14:20:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:23.680 14:20:29 -- scripts/common.sh@364 -- # decimal 1 00:14:23.680 14:20:29 -- scripts/common.sh@352 -- # local d=1 00:14:23.680 14:20:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:23.680 14:20:29 -- scripts/common.sh@354 -- # echo 1 00:14:23.680 14:20:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:23.680 14:20:29 -- scripts/common.sh@365 -- # decimal 2 00:14:23.680 14:20:29 -- scripts/common.sh@352 -- # local d=2 00:14:23.680 14:20:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:23.680 14:20:29 -- scripts/common.sh@354 -- # echo 2 00:14:23.680 14:20:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:23.680 14:20:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:23.680 14:20:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:23.680 14:20:29 -- scripts/common.sh@367 -- # return 0 00:14:23.680 14:20:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:23.680 14:20:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:23.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.680 --rc genhtml_branch_coverage=1 00:14:23.680 --rc genhtml_function_coverage=1 00:14:23.680 --rc genhtml_legend=1 00:14:23.680 --rc geninfo_all_blocks=1 00:14:23.680 --rc geninfo_unexecuted_blocks=1 00:14:23.680 00:14:23.680 ' 00:14:23.680 14:20:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:23.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.680 --rc genhtml_branch_coverage=1 00:14:23.680 --rc genhtml_function_coverage=1 00:14:23.680 --rc genhtml_legend=1 00:14:23.680 --rc geninfo_all_blocks=1 00:14:23.680 --rc geninfo_unexecuted_blocks=1 00:14:23.680 00:14:23.680 ' 00:14:23.680 14:20:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:23.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.680 --rc genhtml_branch_coverage=1 00:14:23.680 --rc genhtml_function_coverage=1 00:14:23.680 --rc genhtml_legend=1 00:14:23.680 --rc geninfo_all_blocks=1 00:14:23.680 --rc geninfo_unexecuted_blocks=1 00:14:23.680 00:14:23.680 ' 00:14:23.680 
14:20:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:23.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.680 --rc genhtml_branch_coverage=1 00:14:23.680 --rc genhtml_function_coverage=1 00:14:23.680 --rc genhtml_legend=1 00:14:23.680 --rc geninfo_all_blocks=1 00:14:23.680 --rc geninfo_unexecuted_blocks=1 00:14:23.680 00:14:23.680 ' 00:14:23.680 14:20:29 -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:23.680 14:20:29 -- nvmf/common.sh@7 -- # uname -s 00:14:23.680 14:20:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:23.680 14:20:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:23.680 14:20:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:23.680 14:20:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:23.680 14:20:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:23.680 14:20:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:23.680 14:20:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:23.680 14:20:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:23.680 14:20:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:23.680 14:20:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:23.680 14:20:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:14:23.680 14:20:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:14:23.680 14:20:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:23.680 14:20:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:23.680 14:20:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:23.680 14:20:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:23.680 14:20:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:23.680 14:20:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:23.680 14:20:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:23.680 14:20:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.680 14:20:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.680 14:20:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.680 14:20:29 -- paths/export.sh@5 -- # export PATH 00:14:23.680 14:20:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.680 14:20:29 -- nvmf/common.sh@46 -- # : 0 00:14:23.680 14:20:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:23.680 14:20:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:23.680 14:20:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:23.680 14:20:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:23.680 14:20:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:23.680 14:20:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:23.680 14:20:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:23.680 14:20:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:23.680 14:20:29 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:23.680 14:20:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:23.680 14:20:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:23.680 14:20:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:23.680 14:20:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:23.680 14:20:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:23.680 14:20:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.680 14:20:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:23.680 14:20:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.680 14:20:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:23.680 14:20:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:23.680 14:20:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:23.680 14:20:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:23.680 14:20:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:23.680 14:20:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:23.680 14:20:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:23.680 14:20:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:23.680 14:20:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:23.680 14:20:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:23.680 14:20:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:23.680 14:20:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:23.680 14:20:29 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:23.680 14:20:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:14:23.680 14:20:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:23.680 14:20:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:23.680 14:20:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:23.680 14:20:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:23.680 14:20:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:23.680 14:20:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:23.680 Cannot find device "nvmf_tgt_br" 00:14:23.680 14:20:29 -- nvmf/common.sh@154 -- # true 00:14:23.680 14:20:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:23.681 Cannot find device "nvmf_tgt_br2" 00:14:23.681 14:20:29 -- nvmf/common.sh@155 -- # true 00:14:23.681 14:20:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:23.681 14:20:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:23.681 Cannot find device "nvmf_tgt_br" 00:14:23.681 14:20:29 -- nvmf/common.sh@157 -- # true 00:14:23.681 14:20:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:23.681 Cannot find device "nvmf_tgt_br2" 00:14:23.681 14:20:29 -- nvmf/common.sh@158 -- # true 00:14:23.681 14:20:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:23.681 14:20:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:23.681 14:20:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:23.681 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:23.681 14:20:29 -- nvmf/common.sh@161 -- # true 00:14:23.681 14:20:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:23.681 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:23.681 14:20:29 -- nvmf/common.sh@162 -- # true 00:14:23.681 14:20:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:23.681 14:20:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:23.681 14:20:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:23.681 14:20:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:23.681 14:20:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:23.939 14:20:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:23.939 14:20:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:23.939 14:20:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:23.939 14:20:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:23.939 14:20:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:23.939 14:20:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:23.939 14:20:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:23.939 14:20:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:23.939 14:20:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:23.939 14:20:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:23.939 14:20:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:23.939 14:20:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:23.939 14:20:29 -- 
nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:23.939 14:20:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:23.939 14:20:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:23.939 14:20:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:23.939 14:20:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:23.939 14:20:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:23.939 14:20:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:23.939 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:23.939 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:14:23.939 00:14:23.939 --- 10.0.0.2 ping statistics --- 00:14:23.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.939 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:14:23.939 14:20:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:23.939 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:23.939 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:14:23.939 00:14:23.939 --- 10.0.0.3 ping statistics --- 00:14:23.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.939 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:14:23.939 14:20:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:23.939 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:23.939 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:14:23.939 00:14:23.939 --- 10.0.0.1 ping statistics --- 00:14:23.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.939 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:14:23.939 14:20:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:23.939 14:20:29 -- nvmf/common.sh@421 -- # return 0 00:14:23.939 14:20:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:23.939 14:20:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:23.939 14:20:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:23.939 14:20:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:23.939 14:20:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:23.939 14:20:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:23.939 14:20:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:23.939 14:20:29 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:23.939 14:20:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:23.939 14:20:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:23.939 14:20:29 -- common/autotest_common.sh@10 -- # set +x 00:14:23.939 14:20:29 -- nvmf/common.sh@469 -- # nvmfpid=82507 00:14:23.939 14:20:29 -- nvmf/common.sh@470 -- # waitforlisten 82507 00:14:23.939 14:20:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:23.939 14:20:29 -- common/autotest_common.sh@829 -- # '[' -z 82507 ']' 00:14:23.939 14:20:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.939 14:20:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:23.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.939 14:20:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
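The nvmf_veth_init sequence above builds the test network entirely in software: veth pairs whose target-side ends are moved into the nvmf_tgt_ns_spdk namespace, their host-side peers enslaved to a bridge, an iptables rule admitting NVMe/TCP traffic on port 4420, and the three pings shown as a connectivity check. A condensed sketch of the same idea for a single initiator/target pair follows (device names and addresses as in the log; the real helper also creates nvmf_tgt_if2 at 10.0.0.3):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays in the host namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end is moved into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                      # bridge the host-side peer ends together
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                           # initiator -> target reachability check

With the namespace wired up, the target is started inside it (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x3, pid 82507 above) and the script waits for its RPC socket at /var/tmp/spdk.sock.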
00:14:23.939 14:20:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:23.939 14:20:29 -- common/autotest_common.sh@10 -- # set +x 00:14:23.939 [2024-12-05 14:20:29.545382] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:23.939 [2024-12-05 14:20:29.545447] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:24.197 [2024-12-05 14:20:29.675252] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:24.197 [2024-12-05 14:20:29.750800] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:24.197 [2024-12-05 14:20:29.750966] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:24.197 [2024-12-05 14:20:29.750978] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:24.197 [2024-12-05 14:20:29.750986] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:24.197 [2024-12-05 14:20:29.751143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:24.197 [2024-12-05 14:20:29.751342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.133 14:20:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:25.133 14:20:30 -- common/autotest_common.sh@862 -- # return 0 00:14:25.133 14:20:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:25.133 14:20:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:25.133 14:20:30 -- common/autotest_common.sh@10 -- # set +x 00:14:25.133 14:20:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:25.133 14:20:30 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:25.133 14:20:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.133 14:20:30 -- common/autotest_common.sh@10 -- # set +x 00:14:25.133 [2024-12-05 14:20:30.564575] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:25.133 14:20:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.133 14:20:30 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:25.133 14:20:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.133 14:20:30 -- common/autotest_common.sh@10 -- # set +x 00:14:25.133 14:20:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.133 14:20:30 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:25.133 14:20:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.133 14:20:30 -- common/autotest_common.sh@10 -- # set +x 00:14:25.133 [2024-12-05 14:20:30.580753] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:25.133 14:20:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.133 14:20:30 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:25.133 14:20:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.133 14:20:30 -- common/autotest_common.sh@10 -- # set +x 00:14:25.133 NULL1 00:14:25.133 14:20:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.133 14:20:30 -- 
target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:25.133 14:20:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.133 14:20:30 -- common/autotest_common.sh@10 -- # set +x 00:14:25.133 Delay0 00:14:25.133 14:20:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.133 14:20:30 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:25.133 14:20:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.133 14:20:30 -- common/autotest_common.sh@10 -- # set +x 00:14:25.133 14:20:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.133 14:20:30 -- target/delete_subsystem.sh@28 -- # perf_pid=82558 00:14:25.133 14:20:30 -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:25.133 14:20:30 -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:25.391 [2024-12-05 14:20:30.785293] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:27.295 14:20:32 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:27.295 14:20:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.295 14:20:32 -- common/autotest_common.sh@10 -- # set +x 00:14:27.295 Read completed with error (sct=0, sc=8) 00:14:27.295 Read completed with error (sct=0, sc=8) 00:14:27.295 Read completed with error (sct=0, sc=8) 00:14:27.295 Read completed with error (sct=0, sc=8) 00:14:27.295 starting I/O failed: -6 00:14:27.295 Read completed with error (sct=0, sc=8) 00:14:27.295 Read completed with error (sct=0, sc=8) 00:14:27.295 Read completed with error (sct=0, sc=8) 00:14:27.295 Read completed with error (sct=0, sc=8) 00:14:27.295 starting I/O failed: -6 00:14:27.295 Read completed with error (sct=0, sc=8) 00:14:27.295 Read completed with error (sct=0, sc=8) 00:14:27.295 Read completed with error (sct=0, sc=8) 00:14:27.295 Write completed with error (sct=0, sc=8) 00:14:27.295 starting I/O failed: -6 00:14:27.295 Read completed with error (sct=0, sc=8) 00:14:27.295 Read completed with error (sct=0, sc=8) 00:14:27.295 Read completed with error (sct=0, sc=8) 00:14:27.295 Read completed with error (sct=0, sc=8) 00:14:27.295 starting I/O failed: -6 00:14:27.295 Write completed with error (sct=0, sc=8) 00:14:27.295 Read completed with error (sct=0, sc=8) 00:14:27.295 Write completed with error (sct=0, sc=8) 00:14:27.295 Write completed with error (sct=0, sc=8) 00:14:27.295 starting I/O failed: -6 00:14:27.295 Read completed with error (sct=0, sc=8) 00:14:27.295 Read completed with error (sct=0, sc=8) 00:14:27.295 Write completed with error (sct=0, sc=8) 00:14:27.295 Write completed with error (sct=0, sc=8) 00:14:27.295 starting I/O failed: -6 00:14:27.295 Read completed with error (sct=0, sc=8) 00:14:27.295 Write completed with error (sct=0, sc=8) 00:14:27.295 Write completed with error (sct=0, sc=8) 00:14:27.295 Write completed with error (sct=0, sc=8) 00:14:27.295 starting I/O failed: -6 00:14:27.295 Write completed with error (sct=0, sc=8) 00:14:27.295 Read completed with error (sct=0, sc=8) 00:14:27.295 Read completed with error (sct=0, sc=8) 00:14:27.295 Read 
completed with error (sct=0, sc=8)
Read and Write completions with error (sct=0, sc=8) and "starting I/O failed: -6" messages repeat for the remaining queued commands between 00:14:27.295 and 00:14:28.234 (repeated lines omitted); the nvme_tcp qpair state-transition errors interleaved with them were:
[2024-12-05 14:20:32.830556] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1826870 is same with the state(5) to be set
[2024-12-05 14:20:33.799207] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825070 is same with the state(5) to be set
00:14:28.234 [2024-12-05 14:20:33.827494] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1827120 is same with the state(5) to be set 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Write completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Write completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Write completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Write completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Write completed with error (sct=0, sc=8) 00:14:28.234 Write completed with error (sct=0, sc=8) 00:14:28.234 Write completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Write completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Write completed with error (sct=0, sc=8) 00:14:28.234 Write completed with error (sct=0, sc=8) 00:14:28.234 [2024-12-05 14:20:33.828160] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1826bc0 is same with the state(5) to be set 00:14:28.234 Write completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Write completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Write completed with error (sct=0, sc=8) 00:14:28.234 Write completed with error (sct=0, sc=8) 00:14:28.234 Write completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Write completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Write completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read 
completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Write completed with error (sct=0, sc=8) 00:14:28.234 Write completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Write completed with error (sct=0, sc=8) 00:14:28.234 Write completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Write completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Write completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 [2024-12-05 14:20:33.828930] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1a4400bf20 is same with the state(5) to be set 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Write completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Write completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Write completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Write completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Write completed with error (sct=0, sc=8) 00:14:28.234 Write completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Write completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 Read completed with error (sct=0, sc=8) 00:14:28.234 [2024-12-05 14:20:33.830492] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1a4400c480 is same with the state(5) to be set 00:14:28.234 [2024-12-05 14:20:33.831336] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1825070 (9): Bad file descriptor 00:14:28.234 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors 
occurred 00:14:28.234 14:20:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.234 14:20:33 -- target/delete_subsystem.sh@34 -- # delay=0 00:14:28.234 14:20:33 -- target/delete_subsystem.sh@35 -- # kill -0 82558 00:14:28.234 14:20:33 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:28.234 Initializing NVMe Controllers 00:14:28.234 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:28.235 Controller IO queue size 128, less than required. 00:14:28.235 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:28.235 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:28.235 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:28.235 Initialization complete. Launching workers. 00:14:28.235 ======================================================== 00:14:28.235 Latency(us) 00:14:28.235 Device Information : IOPS MiB/s Average min max 00:14:28.235 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 185.10 0.09 902760.17 802.95 1016817.25 00:14:28.235 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 178.17 0.09 929816.06 708.19 1019800.72 00:14:28.235 ======================================================== 00:14:28.235 Total : 363.26 0.18 916030.09 708.19 1019800.72 00:14:28.235 00:14:28.803 14:20:34 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:28.803 14:20:34 -- target/delete_subsystem.sh@35 -- # kill -0 82558 00:14:28.803 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (82558) - No such process 00:14:28.803 14:20:34 -- target/delete_subsystem.sh@45 -- # NOT wait 82558 00:14:28.803 14:20:34 -- common/autotest_common.sh@650 -- # local es=0 00:14:28.803 14:20:34 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 82558 00:14:28.803 14:20:34 -- common/autotest_common.sh@638 -- # local arg=wait 00:14:28.803 14:20:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:28.803 14:20:34 -- common/autotest_common.sh@642 -- # type -t wait 00:14:28.803 14:20:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:28.803 14:20:34 -- common/autotest_common.sh@653 -- # wait 82558 00:14:28.803 14:20:34 -- common/autotest_common.sh@653 -- # es=1 00:14:28.803 14:20:34 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:28.803 14:20:34 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:28.803 14:20:34 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:28.803 14:20:34 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:28.803 14:20:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.803 14:20:34 -- common/autotest_common.sh@10 -- # set +x 00:14:28.803 14:20:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.803 14:20:34 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:28.803 14:20:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.803 14:20:34 -- common/autotest_common.sh@10 -- # set +x 00:14:28.803 [2024-12-05 14:20:34.356941] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:28.803 14:20:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.803 14:20:34 -- target/delete_subsystem.sh@50 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:28.803 14:20:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.803 14:20:34 -- common/autotest_common.sh@10 -- # set +x 00:14:28.803 14:20:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.803 14:20:34 -- target/delete_subsystem.sh@54 -- # perf_pid=82598 00:14:28.803 14:20:34 -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:28.803 14:20:34 -- target/delete_subsystem.sh@56 -- # delay=0 00:14:28.803 14:20:34 -- target/delete_subsystem.sh@57 -- # kill -0 82598 00:14:28.803 14:20:34 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:29.062 [2024-12-05 14:20:34.523868] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:29.320 14:20:34 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:29.320 14:20:34 -- target/delete_subsystem.sh@57 -- # kill -0 82598 00:14:29.320 14:20:34 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:29.886 14:20:35 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:29.886 14:20:35 -- target/delete_subsystem.sh@57 -- # kill -0 82598 00:14:29.886 14:20:35 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:30.452 14:20:35 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:30.452 14:20:35 -- target/delete_subsystem.sh@57 -- # kill -0 82598 00:14:30.452 14:20:35 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:31.019 14:20:36 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:31.019 14:20:36 -- target/delete_subsystem.sh@57 -- # kill -0 82598 00:14:31.019 14:20:36 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:31.276 14:20:36 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:31.276 14:20:36 -- target/delete_subsystem.sh@57 -- # kill -0 82598 00:14:31.276 14:20:36 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:31.841 14:20:37 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:31.841 14:20:37 -- target/delete_subsystem.sh@57 -- # kill -0 82598 00:14:31.841 14:20:37 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:32.099 Initializing NVMe Controllers 00:14:32.099 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:32.099 Controller IO queue size 128, less than required. 00:14:32.099 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:32.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:32.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:32.099 Initialization complete. Launching workers. 
00:14:32.099 ======================================================== 00:14:32.099 Latency(us) 00:14:32.099 Device Information : IOPS MiB/s Average min max 00:14:32.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004165.70 1000160.53 1015926.15 00:14:32.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006653.58 1000414.81 1018633.86 00:14:32.099 ======================================================== 00:14:32.099 Total : 256.00 0.12 1005409.64 1000160.53 1018633.86 00:14:32.099 00:14:32.357 14:20:37 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:32.357 14:20:37 -- target/delete_subsystem.sh@57 -- # kill -0 82598 00:14:32.357 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (82598) - No such process 00:14:32.357 14:20:37 -- target/delete_subsystem.sh@67 -- # wait 82598 00:14:32.357 14:20:37 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:32.357 14:20:37 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:32.357 14:20:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:32.357 14:20:37 -- nvmf/common.sh@116 -- # sync 00:14:32.357 14:20:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:32.357 14:20:37 -- nvmf/common.sh@119 -- # set +e 00:14:32.357 14:20:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:32.357 14:20:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:32.357 rmmod nvme_tcp 00:14:32.357 rmmod nvme_fabrics 00:14:32.616 rmmod nvme_keyring 00:14:32.616 14:20:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:32.616 14:20:38 -- nvmf/common.sh@123 -- # set -e 00:14:32.616 14:20:38 -- nvmf/common.sh@124 -- # return 0 00:14:32.616 14:20:38 -- nvmf/common.sh@477 -- # '[' -n 82507 ']' 00:14:32.617 14:20:38 -- nvmf/common.sh@478 -- # killprocess 82507 00:14:32.617 14:20:38 -- common/autotest_common.sh@936 -- # '[' -z 82507 ']' 00:14:32.617 14:20:38 -- common/autotest_common.sh@940 -- # kill -0 82507 00:14:32.617 14:20:38 -- common/autotest_common.sh@941 -- # uname 00:14:32.617 14:20:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:32.617 14:20:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82507 00:14:32.617 killing process with pid 82507 00:14:32.617 14:20:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:32.617 14:20:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:32.617 14:20:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82507' 00:14:32.617 14:20:38 -- common/autotest_common.sh@955 -- # kill 82507 00:14:32.617 14:20:38 -- common/autotest_common.sh@960 -- # wait 82507 00:14:32.876 14:20:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:32.876 14:20:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:32.876 14:20:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:32.876 14:20:38 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:32.876 14:20:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:32.876 14:20:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:32.876 14:20:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:32.876 14:20:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:32.876 14:20:38 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:32.876 ************************************ 00:14:32.876 END TEST nvmf_delete_subsystem 00:14:32.876 ************************************ 
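The repeated "kill -0 82598" / "sleep 0.5" trace lines above come from the wait loop in delete_subsystem.sh (the perf process itself is launched at script line 52, traced further up): the test polls until spdk_nvme_perf exits after the subsystem is torn out from under it, and the outstanding commands come back with (sct=0, sc=8), the status the qpair messages later in this log decode as ABORTED - SQ DELETION (00/08). A rough bash equivalent of that loop, reconstructed from the traced script line numbers and the (( delay++ > 20 )) check shown above -- the actual script may differ in detail:

# Hypothetical condensation of the wait loop traced above (delete_subsystem.sh
# lines 57-60). perf_pid stands in for the traced PID 82598; the 20-iteration
# bound and the 0.5 s sleep are taken directly from the trace.
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do   # perf process still alive?
    (( delay++ > 20 )) && break             # give up after roughly 10 seconds
    sleep 0.5
done

Once the process is gone, the subsequent "kill -0" prints "No such process", which is what ends the loop in the trace above.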
00:14:32.876 00:14:32.876 real 0m9.407s 00:14:32.876 user 0m29.298s 00:14:32.876 sys 0m1.167s 00:14:32.876 14:20:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:32.876 14:20:38 -- common/autotest_common.sh@10 -- # set +x 00:14:32.876 14:20:38 -- nvmf/nvmf.sh@36 -- # [[ 0 -eq 1 ]] 00:14:32.876 14:20:38 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:14:32.876 14:20:38 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:32.876 14:20:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:32.876 14:20:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:32.876 14:20:38 -- common/autotest_common.sh@10 -- # set +x 00:14:32.876 ************************************ 00:14:32.876 START TEST nvmf_host_management 00:14:32.876 ************************************ 00:14:32.876 14:20:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:32.876 * Looking for test storage... 00:14:32.876 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:32.876 14:20:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:32.876 14:20:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:32.876 14:20:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:33.135 14:20:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:33.135 14:20:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:33.135 14:20:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:33.135 14:20:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:33.135 14:20:38 -- scripts/common.sh@335 -- # IFS=.-: 00:14:33.135 14:20:38 -- scripts/common.sh@335 -- # read -ra ver1 00:14:33.135 14:20:38 -- scripts/common.sh@336 -- # IFS=.-: 00:14:33.135 14:20:38 -- scripts/common.sh@336 -- # read -ra ver2 00:14:33.135 14:20:38 -- scripts/common.sh@337 -- # local 'op=<' 00:14:33.135 14:20:38 -- scripts/common.sh@339 -- # ver1_l=2 00:14:33.135 14:20:38 -- scripts/common.sh@340 -- # ver2_l=1 00:14:33.135 14:20:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:33.135 14:20:38 -- scripts/common.sh@343 -- # case "$op" in 00:14:33.135 14:20:38 -- scripts/common.sh@344 -- # : 1 00:14:33.135 14:20:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:33.135 14:20:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:33.135 14:20:38 -- scripts/common.sh@364 -- # decimal 1 00:14:33.135 14:20:38 -- scripts/common.sh@352 -- # local d=1 00:14:33.135 14:20:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:33.135 14:20:38 -- scripts/common.sh@354 -- # echo 1 00:14:33.135 14:20:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:33.135 14:20:38 -- scripts/common.sh@365 -- # decimal 2 00:14:33.135 14:20:38 -- scripts/common.sh@352 -- # local d=2 00:14:33.135 14:20:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:33.135 14:20:38 -- scripts/common.sh@354 -- # echo 2 00:14:33.135 14:20:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:33.135 14:20:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:33.135 14:20:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:33.135 14:20:38 -- scripts/common.sh@367 -- # return 0 00:14:33.135 14:20:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:33.135 14:20:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:33.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.135 --rc genhtml_branch_coverage=1 00:14:33.135 --rc genhtml_function_coverage=1 00:14:33.135 --rc genhtml_legend=1 00:14:33.135 --rc geninfo_all_blocks=1 00:14:33.135 --rc geninfo_unexecuted_blocks=1 00:14:33.135 00:14:33.135 ' 00:14:33.135 14:20:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:33.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.135 --rc genhtml_branch_coverage=1 00:14:33.135 --rc genhtml_function_coverage=1 00:14:33.135 --rc genhtml_legend=1 00:14:33.135 --rc geninfo_all_blocks=1 00:14:33.135 --rc geninfo_unexecuted_blocks=1 00:14:33.135 00:14:33.135 ' 00:14:33.135 14:20:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:33.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.135 --rc genhtml_branch_coverage=1 00:14:33.135 --rc genhtml_function_coverage=1 00:14:33.135 --rc genhtml_legend=1 00:14:33.135 --rc geninfo_all_blocks=1 00:14:33.135 --rc geninfo_unexecuted_blocks=1 00:14:33.135 00:14:33.135 ' 00:14:33.135 14:20:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:33.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.135 --rc genhtml_branch_coverage=1 00:14:33.135 --rc genhtml_function_coverage=1 00:14:33.135 --rc genhtml_legend=1 00:14:33.135 --rc geninfo_all_blocks=1 00:14:33.135 --rc geninfo_unexecuted_blocks=1 00:14:33.135 00:14:33.135 ' 00:14:33.135 14:20:38 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:33.135 14:20:38 -- nvmf/common.sh@7 -- # uname -s 00:14:33.135 14:20:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:33.135 14:20:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:33.135 14:20:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:33.135 14:20:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:33.135 14:20:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:33.135 14:20:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:33.135 14:20:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:33.135 14:20:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:33.135 14:20:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:33.135 14:20:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:33.135 14:20:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 
00:14:33.135 14:20:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:14:33.135 14:20:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:33.135 14:20:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:33.135 14:20:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:33.135 14:20:38 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:33.135 14:20:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:33.135 14:20:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:33.135 14:20:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:33.135 14:20:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.135 14:20:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.135 14:20:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.135 14:20:38 -- paths/export.sh@5 -- # export PATH 00:14:33.135 14:20:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.135 14:20:38 -- nvmf/common.sh@46 -- # : 0 00:14:33.135 14:20:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:33.135 14:20:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:33.135 14:20:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:33.135 14:20:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:33.135 14:20:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:33.135 14:20:38 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:14:33.135 14:20:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:33.135 14:20:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:33.136 14:20:38 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:33.136 14:20:38 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:33.136 14:20:38 -- target/host_management.sh@104 -- # nvmftestinit 00:14:33.136 14:20:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:33.136 14:20:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:33.136 14:20:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:33.136 14:20:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:33.136 14:20:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:33.136 14:20:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.136 14:20:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:33.136 14:20:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.136 14:20:38 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:33.136 14:20:38 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:33.136 14:20:38 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:33.136 14:20:38 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:33.136 14:20:38 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:33.136 14:20:38 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:33.136 14:20:38 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:33.136 14:20:38 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:33.136 14:20:38 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:33.136 14:20:38 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:33.136 14:20:38 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:33.136 14:20:38 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:33.136 14:20:38 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:33.136 14:20:38 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:33.136 14:20:38 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:33.136 14:20:38 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:33.136 14:20:38 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:33.136 14:20:38 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:33.136 14:20:38 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:33.136 14:20:38 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:33.136 Cannot find device "nvmf_tgt_br" 00:14:33.136 14:20:38 -- nvmf/common.sh@154 -- # true 00:14:33.136 14:20:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:33.136 Cannot find device "nvmf_tgt_br2" 00:14:33.136 14:20:38 -- nvmf/common.sh@155 -- # true 00:14:33.136 14:20:38 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:33.136 14:20:38 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:33.136 Cannot find device "nvmf_tgt_br" 00:14:33.136 14:20:38 -- nvmf/common.sh@157 -- # true 00:14:33.136 14:20:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:33.136 Cannot find device "nvmf_tgt_br2" 00:14:33.136 14:20:38 -- nvmf/common.sh@158 -- # true 00:14:33.136 14:20:38 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:33.136 14:20:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:33.136 14:20:38 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:14:33.394 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:33.394 14:20:38 -- nvmf/common.sh@161 -- # true 00:14:33.394 14:20:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:33.394 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:33.394 14:20:38 -- nvmf/common.sh@162 -- # true 00:14:33.394 14:20:38 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:33.394 14:20:38 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:33.394 14:20:38 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:33.395 14:20:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:33.395 14:20:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:33.395 14:20:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:33.395 14:20:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:33.395 14:20:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:33.395 14:20:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:33.395 14:20:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:33.395 14:20:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:33.395 14:20:38 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:33.395 14:20:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:33.395 14:20:38 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:33.395 14:20:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:33.395 14:20:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:33.395 14:20:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:33.395 14:20:38 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:33.395 14:20:38 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:33.395 14:20:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:33.395 14:20:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:33.395 14:20:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:33.395 14:20:38 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:33.395 14:20:38 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:33.395 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:33.395 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:14:33.395 00:14:33.395 --- 10.0.0.2 ping statistics --- 00:14:33.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.395 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:14:33.395 14:20:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:33.395 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:33.395 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:14:33.395 00:14:33.395 --- 10.0.0.3 ping statistics --- 00:14:33.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.395 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:14:33.395 14:20:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:33.395 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:33.395 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:14:33.395 00:14:33.395 --- 10.0.0.1 ping statistics --- 00:14:33.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.395 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:14:33.395 14:20:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:33.395 14:20:38 -- nvmf/common.sh@421 -- # return 0 00:14:33.395 14:20:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:33.395 14:20:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:33.395 14:20:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:33.395 14:20:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:33.395 14:20:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:33.395 14:20:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:33.395 14:20:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:33.395 14:20:38 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:14:33.395 14:20:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:33.395 14:20:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:33.395 14:20:38 -- common/autotest_common.sh@10 -- # set +x 00:14:33.395 ************************************ 00:14:33.395 START TEST nvmf_host_management 00:14:33.395 ************************************ 00:14:33.395 14:20:38 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:14:33.395 14:20:38 -- target/host_management.sh@69 -- # starttarget 00:14:33.395 14:20:38 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:33.395 14:20:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:33.395 14:20:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:33.395 14:20:38 -- common/autotest_common.sh@10 -- # set +x 00:14:33.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.395 14:20:38 -- nvmf/common.sh@469 -- # nvmfpid=82843 00:14:33.395 14:20:38 -- nvmf/common.sh@470 -- # waitforlisten 82843 00:14:33.395 14:20:38 -- common/autotest_common.sh@829 -- # '[' -z 82843 ']' 00:14:33.395 14:20:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.395 14:20:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:33.395 14:20:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.395 14:20:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:33.395 14:20:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:33.395 14:20:38 -- common/autotest_common.sh@10 -- # set +x 00:14:33.395 [2024-12-05 14:20:39.039089] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:33.395 [2024-12-05 14:20:39.039175] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:33.705 [2024-12-05 14:20:39.178362] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:33.705 [2024-12-05 14:20:39.239227] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:33.705 [2024-12-05 14:20:39.239679] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:33.705 [2024-12-05 14:20:39.239734] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:33.705 [2024-12-05 14:20:39.239885] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:33.705 [2024-12-05 14:20:39.240258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:33.705 [2024-12-05 14:20:39.240343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:33.705 [2024-12-05 14:20:39.240475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:33.705 [2024-12-05 14:20:39.240482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:34.661 14:20:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:34.661 14:20:39 -- common/autotest_common.sh@862 -- # return 0 00:14:34.661 14:20:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:34.661 14:20:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:34.661 14:20:39 -- common/autotest_common.sh@10 -- # set +x 00:14:34.661 14:20:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:34.661 14:20:40 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:34.661 14:20:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.661 14:20:40 -- common/autotest_common.sh@10 -- # set +x 00:14:34.661 [2024-12-05 14:20:40.035704] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:34.661 14:20:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.661 14:20:40 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:34.661 14:20:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:34.661 14:20:40 -- common/autotest_common.sh@10 -- # set +x 00:14:34.661 14:20:40 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:14:34.661 14:20:40 -- target/host_management.sh@23 -- # cat 00:14:34.661 14:20:40 -- target/host_management.sh@30 -- # rpc_cmd 00:14:34.661 14:20:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.661 14:20:40 -- common/autotest_common.sh@10 -- # set +x 00:14:34.661 Malloc0 00:14:34.661 [2024-12-05 14:20:40.118193] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:34.661 14:20:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.661 14:20:40 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:34.661 14:20:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:34.661 14:20:40 -- common/autotest_common.sh@10 -- # set +x 00:14:34.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:34.661 14:20:40 -- target/host_management.sh@73 -- # perfpid=82915 00:14:34.661 14:20:40 -- target/host_management.sh@74 -- # waitforlisten 82915 /var/tmp/bdevperf.sock 00:14:34.661 14:20:40 -- common/autotest_common.sh@829 -- # '[' -z 82915 ']' 00:14:34.661 14:20:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:34.661 14:20:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:34.661 14:20:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:14:34.661 14:20:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:34.661 14:20:40 -- common/autotest_common.sh@10 -- # set +x 00:14:34.661 14:20:40 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:34.661 14:20:40 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:34.661 14:20:40 -- nvmf/common.sh@520 -- # config=() 00:14:34.661 14:20:40 -- nvmf/common.sh@520 -- # local subsystem config 00:14:34.661 14:20:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:34.661 14:20:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:34.661 { 00:14:34.661 "params": { 00:14:34.661 "name": "Nvme$subsystem", 00:14:34.661 "trtype": "$TEST_TRANSPORT", 00:14:34.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:34.661 "adrfam": "ipv4", 00:14:34.661 "trsvcid": "$NVMF_PORT", 00:14:34.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:34.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:34.661 "hdgst": ${hdgst:-false}, 00:14:34.661 "ddgst": ${ddgst:-false} 00:14:34.661 }, 00:14:34.661 "method": "bdev_nvme_attach_controller" 00:14:34.661 } 00:14:34.661 EOF 00:14:34.661 )") 00:14:34.661 14:20:40 -- nvmf/common.sh@542 -- # cat 00:14:34.661 14:20:40 -- nvmf/common.sh@544 -- # jq . 00:14:34.661 14:20:40 -- nvmf/common.sh@545 -- # IFS=, 00:14:34.661 14:20:40 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:34.661 "params": { 00:14:34.661 "name": "Nvme0", 00:14:34.661 "trtype": "tcp", 00:14:34.661 "traddr": "10.0.0.2", 00:14:34.661 "adrfam": "ipv4", 00:14:34.661 "trsvcid": "4420", 00:14:34.661 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:34.661 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:34.662 "hdgst": false, 00:14:34.662 "ddgst": false 00:14:34.662 }, 00:14:34.662 "method": "bdev_nvme_attach_controller" 00:14:34.662 }' 00:14:34.662 [2024-12-05 14:20:40.226496] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:34.662 [2024-12-05 14:20:40.226576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82915 ] 00:14:34.967 [2024-12-05 14:20:40.371074] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.967 [2024-12-05 14:20:40.453246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.231 Running I/O for 10 seconds... 
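At this point bdevperf has been launched against the target with a JSON config generated by gen_nvmf_target_json and passed in through /dev/fd/63. The attach parameters are exactly the ones printf'd in the trace above; the sketch below is a hypothetical stand-alone reproduction of the same launch. The outer "subsystems"/"config" envelope is an assumption about the full document the helper emits (only the inner fragment appears verbatim in this trace), and /tmp/bdevperf_nvme0.json is an illustrative path:

# Hypothetical self-contained config equivalent to what gen_nvmf_target_json 0
# pipes into bdevperf above; attach parameters copied from the trace, the
# surrounding "subsystems"/"config" wrapper is assumed, not shown in the log.
cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Flags copied from the traced invocation above.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 10

The -w verify workload writes and reads back the Nvme0n1 namespace for 10 seconds; when the host is removed from the subsystem mid-run (the nvmf_subsystem_remove_host call that follows), the in-flight commands complete with the abort statuses listed below.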
00:14:35.806 14:20:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:35.806 14:20:41 -- common/autotest_common.sh@862 -- # return 0 00:14:35.806 14:20:41 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:35.806 14:20:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.806 14:20:41 -- common/autotest_common.sh@10 -- # set +x 00:14:35.806 14:20:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.806 14:20:41 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:35.806 14:20:41 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:35.806 14:20:41 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:35.806 14:20:41 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:35.806 14:20:41 -- target/host_management.sh@52 -- # local ret=1 00:14:35.806 14:20:41 -- target/host_management.sh@53 -- # local i 00:14:35.806 14:20:41 -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:35.806 14:20:41 -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:35.806 14:20:41 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:35.806 14:20:41 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:35.806 14:20:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.806 14:20:41 -- common/autotest_common.sh@10 -- # set +x 00:14:35.806 14:20:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.806 14:20:41 -- target/host_management.sh@55 -- # read_io_count=2240 00:14:35.806 14:20:41 -- target/host_management.sh@58 -- # '[' 2240 -ge 100 ']' 00:14:35.806 14:20:41 -- target/host_management.sh@59 -- # ret=0 00:14:35.806 14:20:41 -- target/host_management.sh@60 -- # break 00:14:35.806 14:20:41 -- target/host_management.sh@64 -- # return 0 00:14:35.806 14:20:41 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:35.806 14:20:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.806 14:20:41 -- common/autotest_common.sh@10 -- # set +x 00:14:35.806 [2024-12-05 14:20:41.296189] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15cae70 is same with the state(5) to be set 00:14:35.806 [2024-12-05 14:20:41.296290] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15cae70 is same with the state(5) to be set 00:14:35.806 [2024-12-05 14:20:41.296301] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15cae70 is same with the state(5) to be set 00:14:35.806 [2024-12-05 14:20:41.296309] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15cae70 is same with the state(5) to be set 00:14:35.806 [2024-12-05 14:20:41.296317] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15cae70 is same with the state(5) to be set 00:14:35.806 [2024-12-05 14:20:41.296328] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15cae70 is same with the state(5) to be set 00:14:35.806 [2024-12-05 14:20:41.298527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:49152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.806 [2024-12-05 14:20:41.298579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.806 [2024-12-05 14:20:41.298601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:49536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.806 [2024-12-05 14:20:41.298610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.806 [2024-12-05 14:20:41.298620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:49792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.806 [2024-12-05 14:20:41.298628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.806 [2024-12-05 14:20:41.298639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:50048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.806 [2024-12-05 14:20:41.298647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.806 [2024-12-05 14:20:41.298658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:50432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.806 [2024-12-05 14:20:41.298666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.806 [2024-12-05 14:20:41.298675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:50560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.806 [2024-12-05 14:20:41.298683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.806 [2024-12-05 14:20:41.298692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:50688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.806 [2024-12-05 14:20:41.298700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.806 [2024-12-05 14:20:41.298710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:50816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.806 [2024-12-05 14:20:41.298717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.806 [2024-12-05 14:20:41.298726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:50944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.806 [2024-12-05 14:20:41.298734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.806 [2024-12-05 14:20:41.298743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:51328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.806 [2024-12-05 14:20:41.298750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.806 [2024-12-05 14:20:41.298759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:51456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.806 [2024-12-05 14:20:41.298766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.806 [2024-12-05 14:20:41.298775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:51712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.806 [2024-12-05 14:20:41.298783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.806 [2024-12-05 14:20:41.298813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:51840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.806 [2024-12-05 14:20:41.298833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.806 [2024-12-05 14:20:41.298852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:51968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.806 [2024-12-05 14:20:41.298860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.806 [2024-12-05 14:20:41.298869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:52352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.806 [2024-12-05 14:20:41.298879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.806 [2024-12-05 14:20:41.298890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:46336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.806 [2024-12-05 14:20:41.298899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.807 [2024-12-05 14:20:41.298908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:52480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.807 [2024-12-05 14:20:41.298917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.807 [2024-12-05 14:20:41.298927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:52608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.807 [2024-12-05 14:20:41.298935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.807 [2024-12-05 14:20:41.298944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:52736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.807 [2024-12-05 14:20:41.298952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.807 [2024-12-05 14:20:41.298961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:52864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.807 [2024-12-05 14:20:41.298969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.807 [2024-12-05 14:20:41.298979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:52992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.807 [2024-12-05 14:20:41.298987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:14:35.807 [2024-12-05 14:20:41.298996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:46848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.807 [2024-12-05 14:20:41.299003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.807 [2024-12-05 14:20:41.299012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:47104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.807 [2024-12-05 14:20:41.299020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.807 [2024-12-05 14:20:41.299029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:47232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.807 [2024-12-05 14:20:41.299036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.807 [2024-12-05 14:20:41.299046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:47360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.807 [2024-12-05 14:20:41.299054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.807 [2024-12-05 14:20:41.299063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:47488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.807 [2024-12-05 14:20:41.299072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.807 [2024-12-05 14:20:41.299080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:47744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.807 [2024-12-05 14:20:41.299088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.807 [2024-12-05 14:20:41.299097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.807 [2024-12-05 14:20:41.299104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.807 [2024-12-05 14:20:41.299114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:53248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.807 [2024-12-05 14:20:41.299122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.807 [2024-12-05 14:20:41.299131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:53376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.807 [2024-12-05 14:20:41.299139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.807 [2024-12-05 14:20:41.299149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:53504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.807 [2024-12-05 14:20:41.299176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:14:35.807 [2024-12-05 14:20:41.299186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:53632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.807 [2024-12-05 14:20:41.299195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.807 [2024-12-05 14:20:41.299215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:53760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.807 [2024-12-05 14:20:41.299223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.807 [2024-12-05 14:20:41.299241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:53888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.807 [2024-12-05 14:20:41.299249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.807 [2024-12-05 14:20:41.299258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:54016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.807 [2024-12-05 14:20:41.299266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.807 [2024-12-05 14:20:41.299275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:54144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.807 [2024-12-05 14:20:41.299283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.807 [2024-12-05 14:20:41.299292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:54272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.807 [2024-12-05 14:20:41.299299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.807 [2024-12-05 14:20:41.299308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:54400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.807 [2024-12-05 14:20:41.299315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.807 [2024-12-05 14:20:41.299324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:54528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.807 [2024-12-05 14:20:41.299334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.807 [2024-12-05 14:20:41.299343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:54656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.807 [2024-12-05 14:20:41.299350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.807 [2024-12-05 14:20:41.299359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:54784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.807 [2024-12-05 14:20:41.299367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.807 
[2024-12-05 14:20:41.299376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:54912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.807 [2024-12-05 14:20:41.299384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.807 [2024-12-05 14:20:41.299393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:55040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.807 [2024-12-05 14:20:41.299401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.807 [2024-12-05 14:20:41.299410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:55168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.807 [2024-12-05 14:20:41.299418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.807 [2024-12-05 14:20:41.299427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:55296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.807 [2024-12-05 14:20:41.299434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.807 [2024-12-05 14:20:41.299443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:48000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.807 [2024-12-05 14:20:41.299450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.807 [2024-12-05 14:20:41.299459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:55424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.807 [2024-12-05 14:20:41.299468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.807 [2024-12-05 14:20:41.299477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:55552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.807 [2024-12-05 14:20:41.299485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.807 [2024-12-05 14:20:41.299494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:55680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.807 [2024-12-05 14:20:41.299501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.807 [2024-12-05 14:20:41.299510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:55808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.807 [2024-12-05 14:20:41.299518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.807 [2024-12-05 14:20:41.299527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:48128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.807 [2024-12-05 14:20:41.299534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.807 [2024-12-05 
14:20:41.299544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:55936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.807 [2024-12-05 14:20:41.299551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.807 [2024-12-05 14:20:41.299560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:48384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.807 [2024-12-05 14:20:41.299568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.807 [2024-12-05 14:20:41.299577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:56064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.807 [2024-12-05 14:20:41.299584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.807 [2024-12-05 14:20:41.299593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:56192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.807 [2024-12-05 14:20:41.299601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.807 [2024-12-05 14:20:41.299611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:56320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.808 [2024-12-05 14:20:41.299618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.808 [2024-12-05 14:20:41.299627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:48640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.808 [2024-12-05 14:20:41.299635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.808 [2024-12-05 14:20:41.299644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:56448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.808 [2024-12-05 14:20:41.299651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.808 [2024-12-05 14:20:41.299660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:48768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.808 [2024-12-05 14:20:41.299668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.808 [2024-12-05 14:20:41.299677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:56576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.808 [2024-12-05 14:20:41.299685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.808 [2024-12-05 14:20:41.299693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:56704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:35.808 [2024-12-05 14:20:41.299701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.808 [2024-12-05 14:20:41.299710] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:56832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:35.808 [2024-12-05 14:20:41.299717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:35.808 [2024-12-05 14:20:41.299725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:56960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:35.808 [2024-12-05 14:20:41.299737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:35.808 [2024-12-05 14:20:41.299746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:49024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:35.808 [2024-12-05 14:20:41.299753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:35.808 [2024-12-05 14:20:41.299890] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17d3dc0 was disconnected and freed. reset controller.
00:14:35.808 task offset: 49152 on job bdev=Nvme0n1 fails
00:14:35.808
00:14:35.808 Latency(us)
00:14:35.808 [2024-12-05T14:20:41.456Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:35.808 [2024-12-05T14:20:41.456Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:14:35.808 [2024-12-05T14:20:41.456Z] Job: Nvme0n1 ended in about 0.62 seconds with error
00:14:35.808 Verification LBA range: start 0x0 length 0x400
00:14:35.808 Nvme0n1 : 0.62 3928.82 245.55 103.48 0.00 15614.01 1765.00 22878.02
00:14:35.808 [2024-12-05T14:20:41.456Z] ===================================================================================================================
00:14:35.808 [2024-12-05T14:20:41.456Z] Total : 3928.82 245.55 103.48 0.00 15614.01 1765.00 22878.02
00:14:35.808 14:20:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:35.808 [2024-12-05 14:20:41.300975] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:14:35.808 14:20:41 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:14:35.808 14:20:41 -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:35.808 14:20:41 -- common/autotest_common.sh@10 -- # set +x
00:14:35.808 [2024-12-05 14:20:41.302649] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:14:35.808 [2024-12-05 14:20:41.302673] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x172fa70 (9): Bad file descriptor
00:14:35.808 14:20:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:35.808 14:20:41 -- target/host_management.sh@87 -- # sleep 1
00:14:35.808 [2024-12-05 14:20:41.311225] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
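A note on the burst of *NOTICE* messages above: the completion status printed as "ABORTED - SQ DELETION (00/08)" is status code type 0x0 (generic command status) with status code 0x08, "Command Aborted due to SQ Deletion" in NVMe terms, which is the expected outcome for I/O still outstanding when the submission queue is torn down by the controller reset that follows. A quick way to tally these events from a saved copy of this console output (the file name build.log is only an example, not something the test scripts produce):

    grep -c 'ABORTED - SQ DELETION' build.log          # count the aborted completions
    grep -o 'lba:[0-9]*' build.log | sort -u | head    # sample the LBAs of the affected commands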
00:14:36.745 14:20:42 -- target/host_management.sh@91 -- # kill -9 82915 00:14:36.745 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (82915) - No such process 00:14:36.745 14:20:42 -- target/host_management.sh@91 -- # true 00:14:36.745 14:20:42 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:36.745 14:20:42 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:36.745 14:20:42 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:36.745 14:20:42 -- nvmf/common.sh@520 -- # config=() 00:14:36.745 14:20:42 -- nvmf/common.sh@520 -- # local subsystem config 00:14:36.745 14:20:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:36.745 14:20:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:36.745 { 00:14:36.745 "params": { 00:14:36.745 "name": "Nvme$subsystem", 00:14:36.745 "trtype": "$TEST_TRANSPORT", 00:14:36.745 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:36.745 "adrfam": "ipv4", 00:14:36.745 "trsvcid": "$NVMF_PORT", 00:14:36.745 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:36.745 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:36.745 "hdgst": ${hdgst:-false}, 00:14:36.745 "ddgst": ${ddgst:-false} 00:14:36.745 }, 00:14:36.745 "method": "bdev_nvme_attach_controller" 00:14:36.745 } 00:14:36.745 EOF 00:14:36.745 )") 00:14:36.745 14:20:42 -- nvmf/common.sh@542 -- # cat 00:14:36.745 14:20:42 -- nvmf/common.sh@544 -- # jq . 00:14:36.745 14:20:42 -- nvmf/common.sh@545 -- # IFS=, 00:14:36.745 14:20:42 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:36.745 "params": { 00:14:36.745 "name": "Nvme0", 00:14:36.745 "trtype": "tcp", 00:14:36.745 "traddr": "10.0.0.2", 00:14:36.745 "adrfam": "ipv4", 00:14:36.745 "trsvcid": "4420", 00:14:36.745 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:36.745 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:36.745 "hdgst": false, 00:14:36.745 "ddgst": false 00:14:36.745 }, 00:14:36.745 "method": "bdev_nvme_attach_controller" 00:14:36.745 }' 00:14:36.745 [2024-12-05 14:20:42.358431] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:36.745 [2024-12-05 14:20:42.358493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82965 ] 00:14:37.005 [2024-12-05 14:20:42.490613] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.005 [2024-12-05 14:20:42.553332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.264 Running I/O for 1 seconds... 
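The bdevperf process above reads its bdev configuration as JSON from /dev/fd/62; the printf output shows the bdev_nvme_attach_controller call that gen_nvmf_target_json builds for Nvme0. A rough standalone equivalent is sketched below (the outer "subsystems"/"config" wrapper and the file name nvme0.json are assumptions for illustration, not text from host_management.sh): save the config to a file and pass it with --json.

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json nvme0.json -q 64 -o 65536 -w verify -t 1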
00:14:38.202 00:14:38.202 Latency(us) 00:14:38.202 [2024-12-05T14:20:43.850Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.202 [2024-12-05T14:20:43.850Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:38.202 Verification LBA range: start 0x0 length 0x400 00:14:38.202 Nvme0n1 : 1.01 3994.47 249.65 0.00 0.00 15761.76 506.41 22639.71 00:14:38.202 [2024-12-05T14:20:43.850Z] =================================================================================================================== 00:14:38.202 [2024-12-05T14:20:43.850Z] Total : 3994.47 249.65 0.00 0.00 15761.76 506.41 22639.71 00:14:38.461 14:20:44 -- target/host_management.sh@101 -- # stoptarget 00:14:38.461 14:20:44 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:38.461 14:20:44 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:14:38.461 14:20:44 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:14:38.461 14:20:44 -- target/host_management.sh@40 -- # nvmftestfini 00:14:38.461 14:20:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:38.461 14:20:44 -- nvmf/common.sh@116 -- # sync 00:14:38.721 14:20:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:38.721 14:20:44 -- nvmf/common.sh@119 -- # set +e 00:14:38.721 14:20:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:38.721 14:20:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:38.721 rmmod nvme_tcp 00:14:38.721 rmmod nvme_fabrics 00:14:38.721 rmmod nvme_keyring 00:14:38.721 14:20:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:38.721 14:20:44 -- nvmf/common.sh@123 -- # set -e 00:14:38.721 14:20:44 -- nvmf/common.sh@124 -- # return 0 00:14:38.721 14:20:44 -- nvmf/common.sh@477 -- # '[' -n 82843 ']' 00:14:38.721 14:20:44 -- nvmf/common.sh@478 -- # killprocess 82843 00:14:38.721 14:20:44 -- common/autotest_common.sh@936 -- # '[' -z 82843 ']' 00:14:38.721 14:20:44 -- common/autotest_common.sh@940 -- # kill -0 82843 00:14:38.721 14:20:44 -- common/autotest_common.sh@941 -- # uname 00:14:38.721 14:20:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:38.721 14:20:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82843 00:14:38.721 14:20:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:38.721 killing process with pid 82843 00:14:38.721 14:20:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:38.721 14:20:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82843' 00:14:38.721 14:20:44 -- common/autotest_common.sh@955 -- # kill 82843 00:14:38.721 14:20:44 -- common/autotest_common.sh@960 -- # wait 82843 00:14:38.981 [2024-12-05 14:20:44.425714] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:38.981 14:20:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:38.981 14:20:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:38.981 14:20:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:38.981 14:20:44 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:38.981 14:20:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:38.981 14:20:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.981 14:20:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:38.981 14:20:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.981 14:20:44 -- 
nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:38.981 00:14:38.981 real 0m5.512s 00:14:38.981 user 0m23.257s 00:14:38.981 sys 0m1.418s 00:14:38.981 14:20:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:38.981 14:20:44 -- common/autotest_common.sh@10 -- # set +x 00:14:38.981 ************************************ 00:14:38.981 END TEST nvmf_host_management 00:14:38.981 ************************************ 00:14:38.981 14:20:44 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:14:38.981 00:14:38.981 real 0m6.104s 00:14:38.981 user 0m23.459s 00:14:38.981 sys 0m1.681s 00:14:38.981 14:20:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:38.981 ************************************ 00:14:38.981 END TEST nvmf_host_management 00:14:38.981 ************************************ 00:14:38.981 14:20:44 -- common/autotest_common.sh@10 -- # set +x 00:14:38.981 14:20:44 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:38.981 14:20:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:38.981 14:20:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:38.981 14:20:44 -- common/autotest_common.sh@10 -- # set +x 00:14:38.981 ************************************ 00:14:38.981 START TEST nvmf_lvol 00:14:38.981 ************************************ 00:14:38.981 14:20:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:39.241 * Looking for test storage... 00:14:39.241 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:39.241 14:20:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:39.241 14:20:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:39.241 14:20:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:39.241 14:20:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:39.241 14:20:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:39.241 14:20:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:39.241 14:20:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:39.241 14:20:44 -- scripts/common.sh@335 -- # IFS=.-: 00:14:39.241 14:20:44 -- scripts/common.sh@335 -- # read -ra ver1 00:14:39.241 14:20:44 -- scripts/common.sh@336 -- # IFS=.-: 00:14:39.241 14:20:44 -- scripts/common.sh@336 -- # read -ra ver2 00:14:39.241 14:20:44 -- scripts/common.sh@337 -- # local 'op=<' 00:14:39.241 14:20:44 -- scripts/common.sh@339 -- # ver1_l=2 00:14:39.241 14:20:44 -- scripts/common.sh@340 -- # ver2_l=1 00:14:39.241 14:20:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:39.241 14:20:44 -- scripts/common.sh@343 -- # case "$op" in 00:14:39.241 14:20:44 -- scripts/common.sh@344 -- # : 1 00:14:39.241 14:20:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:39.241 14:20:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:39.241 14:20:44 -- scripts/common.sh@364 -- # decimal 1 00:14:39.241 14:20:44 -- scripts/common.sh@352 -- # local d=1 00:14:39.241 14:20:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:39.241 14:20:44 -- scripts/common.sh@354 -- # echo 1 00:14:39.241 14:20:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:39.241 14:20:44 -- scripts/common.sh@365 -- # decimal 2 00:14:39.241 14:20:44 -- scripts/common.sh@352 -- # local d=2 00:14:39.241 14:20:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:39.241 14:20:44 -- scripts/common.sh@354 -- # echo 2 00:14:39.241 14:20:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:39.241 14:20:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:39.241 14:20:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:39.241 14:20:44 -- scripts/common.sh@367 -- # return 0 00:14:39.241 14:20:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:39.241 14:20:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:39.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:39.241 --rc genhtml_branch_coverage=1 00:14:39.241 --rc genhtml_function_coverage=1 00:14:39.242 --rc genhtml_legend=1 00:14:39.242 --rc geninfo_all_blocks=1 00:14:39.242 --rc geninfo_unexecuted_blocks=1 00:14:39.242 00:14:39.242 ' 00:14:39.242 14:20:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:39.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:39.242 --rc genhtml_branch_coverage=1 00:14:39.242 --rc genhtml_function_coverage=1 00:14:39.242 --rc genhtml_legend=1 00:14:39.242 --rc geninfo_all_blocks=1 00:14:39.242 --rc geninfo_unexecuted_blocks=1 00:14:39.242 00:14:39.242 ' 00:14:39.242 14:20:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:39.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:39.242 --rc genhtml_branch_coverage=1 00:14:39.242 --rc genhtml_function_coverage=1 00:14:39.242 --rc genhtml_legend=1 00:14:39.242 --rc geninfo_all_blocks=1 00:14:39.242 --rc geninfo_unexecuted_blocks=1 00:14:39.242 00:14:39.242 ' 00:14:39.242 14:20:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:39.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:39.242 --rc genhtml_branch_coverage=1 00:14:39.242 --rc genhtml_function_coverage=1 00:14:39.242 --rc genhtml_legend=1 00:14:39.242 --rc geninfo_all_blocks=1 00:14:39.242 --rc geninfo_unexecuted_blocks=1 00:14:39.242 00:14:39.242 ' 00:14:39.242 14:20:44 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:39.242 14:20:44 -- nvmf/common.sh@7 -- # uname -s 00:14:39.242 14:20:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:39.242 14:20:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:39.242 14:20:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:39.242 14:20:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:39.242 14:20:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:39.242 14:20:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:39.242 14:20:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:39.242 14:20:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:39.242 14:20:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:39.242 14:20:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:39.242 14:20:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:14:39.242 
14:20:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:14:39.242 14:20:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:39.242 14:20:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:39.242 14:20:44 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:39.242 14:20:44 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:39.242 14:20:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:39.242 14:20:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:39.242 14:20:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:39.242 14:20:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.242 14:20:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.242 14:20:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.242 14:20:44 -- paths/export.sh@5 -- # export PATH 00:14:39.242 14:20:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.242 14:20:44 -- nvmf/common.sh@46 -- # : 0 00:14:39.242 14:20:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:39.242 14:20:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:39.242 14:20:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:39.242 14:20:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:39.242 14:20:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:39.242 14:20:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:14:39.242 14:20:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:39.242 14:20:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:39.242 14:20:44 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:39.242 14:20:44 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:39.242 14:20:44 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:39.242 14:20:44 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:39.242 14:20:44 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:39.242 14:20:44 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:39.242 14:20:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:39.242 14:20:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:39.242 14:20:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:39.242 14:20:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:39.242 14:20:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:39.242 14:20:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.242 14:20:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:39.242 14:20:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:39.242 14:20:44 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:39.242 14:20:44 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:39.242 14:20:44 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:39.242 14:20:44 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:39.242 14:20:44 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:39.242 14:20:44 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:39.242 14:20:44 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:39.242 14:20:44 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:39.242 14:20:44 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:39.242 14:20:44 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:39.242 14:20:44 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:39.242 14:20:44 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:39.242 14:20:44 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:39.242 14:20:44 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:39.242 14:20:44 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:39.242 14:20:44 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:39.242 14:20:44 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:39.242 14:20:44 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:39.242 14:20:44 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:39.242 14:20:44 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:39.242 Cannot find device "nvmf_tgt_br" 00:14:39.242 14:20:44 -- nvmf/common.sh@154 -- # true 00:14:39.242 14:20:44 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:39.242 Cannot find device "nvmf_tgt_br2" 00:14:39.242 14:20:44 -- nvmf/common.sh@155 -- # true 00:14:39.242 14:20:44 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:39.242 14:20:44 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:39.242 Cannot find device "nvmf_tgt_br" 00:14:39.242 14:20:44 -- nvmf/common.sh@157 -- # true 00:14:39.242 14:20:44 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:39.242 Cannot find device "nvmf_tgt_br2" 00:14:39.242 14:20:44 -- nvmf/common.sh@158 -- # true 00:14:39.242 14:20:44 -- nvmf/common.sh@159 -- # ip 
link delete nvmf_br type bridge 00:14:39.242 14:20:44 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:39.502 14:20:44 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:39.502 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:39.502 14:20:44 -- nvmf/common.sh@161 -- # true 00:14:39.502 14:20:44 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:39.503 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:39.503 14:20:44 -- nvmf/common.sh@162 -- # true 00:14:39.503 14:20:44 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:39.503 14:20:44 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:39.503 14:20:44 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:39.503 14:20:44 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:39.503 14:20:44 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:39.503 14:20:44 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:39.503 14:20:44 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:39.503 14:20:44 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:39.503 14:20:44 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:39.503 14:20:44 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:39.503 14:20:44 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:39.503 14:20:44 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:39.503 14:20:44 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:39.503 14:20:44 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:39.503 14:20:44 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:39.503 14:20:44 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:39.503 14:20:44 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:39.503 14:20:44 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:39.503 14:20:44 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:39.503 14:20:45 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:39.503 14:20:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:39.503 14:20:45 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:39.503 14:20:45 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:39.503 14:20:45 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:39.503 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:39.503 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:14:39.503 00:14:39.503 --- 10.0.0.2 ping statistics --- 00:14:39.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:39.503 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:14:39.503 14:20:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:39.503 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:39.503 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:14:39.503 00:14:39.503 --- 10.0.0.3 ping statistics --- 00:14:39.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:39.503 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:14:39.503 14:20:45 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:39.503 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:39.503 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:14:39.503 00:14:39.503 --- 10.0.0.1 ping statistics --- 00:14:39.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:39.503 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:14:39.503 14:20:45 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:39.503 14:20:45 -- nvmf/common.sh@421 -- # return 0 00:14:39.503 14:20:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:39.503 14:20:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:39.503 14:20:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:39.503 14:20:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:39.503 14:20:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:39.503 14:20:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:39.503 14:20:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:39.503 14:20:45 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:39.503 14:20:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:39.503 14:20:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:39.503 14:20:45 -- common/autotest_common.sh@10 -- # set +x 00:14:39.503 14:20:45 -- nvmf/common.sh@469 -- # nvmfpid=83206 00:14:39.503 14:20:45 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:39.503 14:20:45 -- nvmf/common.sh@470 -- # waitforlisten 83206 00:14:39.503 14:20:45 -- common/autotest_common.sh@829 -- # '[' -z 83206 ']' 00:14:39.503 14:20:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.503 14:20:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:39.503 14:20:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.503 14:20:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:39.503 14:20:45 -- common/autotest_common.sh@10 -- # set +x 00:14:39.503 [2024-12-05 14:20:45.132587] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:39.503 [2024-12-05 14:20:45.132674] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:39.761 [2024-12-05 14:20:45.274219] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:39.761 [2024-12-05 14:20:45.348110] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:39.761 [2024-12-05 14:20:45.348288] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:39.761 [2024-12-05 14:20:45.348302] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
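Recapping the nvmf_veth_init sequence traced above: the target runs inside the nvmf_tgt_ns_spdk namespace behind veth pairs (10.0.0.2 and 10.0.0.3), the initiator side stays in the root namespace on 10.0.0.1, the host ends are enslaved to the nvmf_br bridge, and an iptables rule opens TCP port 4420; the pings verify connectivity before the target starts listening. Condensed from the commands already shown (a recap, not an extra test step; the "up" commands and the second target interface are elided):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br        # likewise nvmf_tgt_br and nvmf_tgt_br2
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                             # initiator to target address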
00:14:39.761 [2024-12-05 14:20:45.348311] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:39.761 [2024-12-05 14:20:45.348469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:39.762 [2024-12-05 14:20:45.349016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:39.762 [2024-12-05 14:20:45.349029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.693 14:20:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:40.693 14:20:46 -- common/autotest_common.sh@862 -- # return 0 00:14:40.693 14:20:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:40.693 14:20:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:40.694 14:20:46 -- common/autotest_common.sh@10 -- # set +x 00:14:40.694 14:20:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:40.694 14:20:46 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:40.951 [2024-12-05 14:20:46.363860] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:40.951 14:20:46 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:41.209 14:20:46 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:41.209 14:20:46 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:41.466 14:20:47 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:41.466 14:20:47 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:41.725 14:20:47 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:41.984 14:20:47 -- target/nvmf_lvol.sh@29 -- # lvs=b32a26b1-a448-425d-be45-5e03b30011c3 00:14:41.984 14:20:47 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b32a26b1-a448-425d-be45-5e03b30011c3 lvol 20 00:14:42.243 14:20:47 -- target/nvmf_lvol.sh@32 -- # lvol=46985868-0767-470e-86f7-3d6caeb3d473 00:14:42.243 14:20:47 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:42.501 14:20:48 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 46985868-0767-470e-86f7-3d6caeb3d473 00:14:42.759 14:20:48 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:43.018 [2024-12-05 14:20:48.506594] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:43.018 14:20:48 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:43.277 14:20:48 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:43.277 14:20:48 -- target/nvmf_lvol.sh@42 -- # perf_pid=83354 00:14:43.277 14:20:48 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:44.213 14:20:49 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 46985868-0767-470e-86f7-3d6caeb3d473 MY_SNAPSHOT 
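The spdk_nvme_perf workload launched a few entries above (perf_pid=83354) keeps running in the background while the bdev_lvol_snapshot/resize/clone/inflate calls around this point are issued; the test only waits for it at nvmf_lvol.sh@53. Its -c 0x18 core mask selects lcores 3 and 4, which is why the per-core result rows further down report cores 3 and 4. A small illustration of decoding such a mask (plain bash, not part of nvmf_lvol.sh):

    mask=0x18
    for i in $(seq 0 7); do
        (( (mask >> i) & 1 )) && echo "lcore $i"   # prints: lcore 3, lcore 4
    done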
00:14:44.472 14:20:50 -- target/nvmf_lvol.sh@47 -- # snapshot=45594ad0-8add-4e2b-83c5-af1309765664 00:14:44.472 14:20:50 -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 46985868-0767-470e-86f7-3d6caeb3d473 30 00:14:44.730 14:20:50 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 45594ad0-8add-4e2b-83c5-af1309765664 MY_CLONE 00:14:44.989 14:20:50 -- target/nvmf_lvol.sh@49 -- # clone=939730e1-1f12-492f-b7dc-ddb908af7b87 00:14:44.989 14:20:50 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 939730e1-1f12-492f-b7dc-ddb908af7b87 00:14:45.925 14:20:51 -- target/nvmf_lvol.sh@53 -- # wait 83354 00:14:54.039 Initializing NVMe Controllers 00:14:54.039 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:54.039 Controller IO queue size 128, less than required. 00:14:54.039 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:54.039 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:54.039 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:54.039 Initialization complete. Launching workers. 00:14:54.039 ======================================================== 00:14:54.039 Latency(us) 00:14:54.040 Device Information : IOPS MiB/s Average min max 00:14:54.040 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 7611.40 29.73 16839.87 2537.24 91335.71 00:14:54.040 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 7857.50 30.69 16309.96 3810.31 79111.10 00:14:54.040 ======================================================== 00:14:54.040 Total : 15468.89 60.43 16570.70 2537.24 91335.71 00:14:54.040 00:14:54.040 14:20:59 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:54.040 14:20:59 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 46985868-0767-470e-86f7-3d6caeb3d473 00:14:54.040 14:20:59 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b32a26b1-a448-425d-be45-5e03b30011c3 00:14:54.299 14:20:59 -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:54.299 14:20:59 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:54.299 14:20:59 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:54.299 14:20:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:54.299 14:20:59 -- nvmf/common.sh@116 -- # sync 00:14:54.299 14:20:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:54.299 14:20:59 -- nvmf/common.sh@119 -- # set +e 00:14:54.299 14:20:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:54.299 14:20:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:54.299 rmmod nvme_tcp 00:14:54.299 rmmod nvme_fabrics 00:14:54.299 rmmod nvme_keyring 00:14:54.299 14:20:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:54.299 14:20:59 -- nvmf/common.sh@123 -- # set -e 00:14:54.299 14:20:59 -- nvmf/common.sh@124 -- # return 0 00:14:54.299 14:20:59 -- nvmf/common.sh@477 -- # '[' -n 83206 ']' 00:14:54.299 14:20:59 -- nvmf/common.sh@478 -- # killprocess 83206 00:14:54.299 14:20:59 -- common/autotest_common.sh@936 -- # '[' -z 83206 ']' 00:14:54.299 14:20:59 -- common/autotest_common.sh@940 -- # kill -0 83206 00:14:54.299 14:20:59 -- common/autotest_common.sh@941 -- # uname 00:14:54.299 
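Summarizing the RPC sequence the nvmf_lvol test drives end to end (every command below appears verbatim in the trace above; the <...> placeholders stand for the UUIDs returned by earlier calls, e.g. b32a26b1-... for the lvstore and 46985868-... for the lvol):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                      # run twice: Malloc0 and Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    rpc.py bdev_lvol_create_lvstore raid0 lvs             # returns the lvstore UUID
    rpc.py bdev_lvol_create -u <lvstore-uuid> lvol 20     # returns the lvol UUID
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT     # returns the snapshot UUID
    rpc.py bdev_lvol_resize <lvol-uuid> 30
    rpc.py bdev_lvol_clone <snapshot-uuid> MY_CLONE       # returns the clone UUID
    rpc.py bdev_lvol_inflate <clone-uuid>
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    rpc.py bdev_lvol_delete <lvol-uuid>
    rpc.py bdev_lvol_delete_lvstore -u <lvstore-uuid>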
14:20:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:54.299 14:20:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83206 00:14:54.299 killing process with pid 83206 00:14:54.299 14:20:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:54.299 14:20:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:54.299 14:20:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83206' 00:14:54.299 14:20:59 -- common/autotest_common.sh@955 -- # kill 83206 00:14:54.299 14:20:59 -- common/autotest_common.sh@960 -- # wait 83206 00:14:54.868 14:21:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:54.868 14:21:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:54.868 14:21:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:54.868 14:21:00 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:54.868 14:21:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:54.868 14:21:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.868 14:21:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:54.868 14:21:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:54.868 14:21:00 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:54.868 ************************************ 00:14:54.868 END TEST nvmf_lvol 00:14:54.868 ************************************ 00:14:54.868 00:14:54.868 real 0m15.732s 00:14:54.868 user 1m6.316s 00:14:54.868 sys 0m3.074s 00:14:54.868 14:21:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:54.868 14:21:00 -- common/autotest_common.sh@10 -- # set +x 00:14:54.868 14:21:00 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:54.868 14:21:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:54.868 14:21:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:54.868 14:21:00 -- common/autotest_common.sh@10 -- # set +x 00:14:54.868 ************************************ 00:14:54.868 START TEST nvmf_lvs_grow 00:14:54.868 ************************************ 00:14:54.868 14:21:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:54.868 * Looking for test storage... 
00:14:54.868 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:54.868 14:21:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:54.868 14:21:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:54.868 14:21:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:55.128 14:21:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:55.128 14:21:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:55.128 14:21:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:55.128 14:21:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:55.128 14:21:00 -- scripts/common.sh@335 -- # IFS=.-: 00:14:55.128 14:21:00 -- scripts/common.sh@335 -- # read -ra ver1 00:14:55.128 14:21:00 -- scripts/common.sh@336 -- # IFS=.-: 00:14:55.128 14:21:00 -- scripts/common.sh@336 -- # read -ra ver2 00:14:55.128 14:21:00 -- scripts/common.sh@337 -- # local 'op=<' 00:14:55.128 14:21:00 -- scripts/common.sh@339 -- # ver1_l=2 00:14:55.128 14:21:00 -- scripts/common.sh@340 -- # ver2_l=1 00:14:55.128 14:21:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:55.128 14:21:00 -- scripts/common.sh@343 -- # case "$op" in 00:14:55.128 14:21:00 -- scripts/common.sh@344 -- # : 1 00:14:55.128 14:21:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:55.128 14:21:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:55.128 14:21:00 -- scripts/common.sh@364 -- # decimal 1 00:14:55.128 14:21:00 -- scripts/common.sh@352 -- # local d=1 00:14:55.128 14:21:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:55.128 14:21:00 -- scripts/common.sh@354 -- # echo 1 00:14:55.128 14:21:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:55.128 14:21:00 -- scripts/common.sh@365 -- # decimal 2 00:14:55.128 14:21:00 -- scripts/common.sh@352 -- # local d=2 00:14:55.128 14:21:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:55.128 14:21:00 -- scripts/common.sh@354 -- # echo 2 00:14:55.128 14:21:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:55.128 14:21:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:55.128 14:21:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:55.128 14:21:00 -- scripts/common.sh@367 -- # return 0 00:14:55.128 14:21:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:55.128 14:21:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:55.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.128 --rc genhtml_branch_coverage=1 00:14:55.128 --rc genhtml_function_coverage=1 00:14:55.128 --rc genhtml_legend=1 00:14:55.128 --rc geninfo_all_blocks=1 00:14:55.128 --rc geninfo_unexecuted_blocks=1 00:14:55.128 00:14:55.128 ' 00:14:55.128 14:21:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:55.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.128 --rc genhtml_branch_coverage=1 00:14:55.128 --rc genhtml_function_coverage=1 00:14:55.128 --rc genhtml_legend=1 00:14:55.128 --rc geninfo_all_blocks=1 00:14:55.128 --rc geninfo_unexecuted_blocks=1 00:14:55.128 00:14:55.128 ' 00:14:55.128 14:21:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:55.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.128 --rc genhtml_branch_coverage=1 00:14:55.128 --rc genhtml_function_coverage=1 00:14:55.128 --rc genhtml_legend=1 00:14:55.128 --rc geninfo_all_blocks=1 00:14:55.128 --rc geninfo_unexecuted_blocks=1 00:14:55.128 00:14:55.128 ' 00:14:55.128 
14:21:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:55.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.128 --rc genhtml_branch_coverage=1 00:14:55.128 --rc genhtml_function_coverage=1 00:14:55.128 --rc genhtml_legend=1 00:14:55.128 --rc geninfo_all_blocks=1 00:14:55.128 --rc geninfo_unexecuted_blocks=1 00:14:55.128 00:14:55.128 ' 00:14:55.128 14:21:00 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:55.128 14:21:00 -- nvmf/common.sh@7 -- # uname -s 00:14:55.128 14:21:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:55.128 14:21:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:55.128 14:21:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:55.128 14:21:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:55.128 14:21:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:55.129 14:21:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:55.129 14:21:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:55.129 14:21:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:55.129 14:21:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:55.129 14:21:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:55.129 14:21:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:14:55.129 14:21:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:14:55.129 14:21:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:55.129 14:21:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:55.129 14:21:00 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:55.129 14:21:00 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:55.129 14:21:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:55.129 14:21:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:55.129 14:21:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:55.129 14:21:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.129 14:21:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.129 14:21:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.129 14:21:00 -- paths/export.sh@5 -- # export PATH 00:14:55.129 14:21:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.129 14:21:00 -- nvmf/common.sh@46 -- # : 0 00:14:55.129 14:21:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:55.129 14:21:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:55.129 14:21:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:55.129 14:21:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:55.129 14:21:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:55.129 14:21:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:55.129 14:21:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:55.129 14:21:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:55.129 14:21:00 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:55.129 14:21:00 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:55.129 14:21:00 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:14:55.129 14:21:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:55.129 14:21:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:55.129 14:21:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:55.129 14:21:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:55.129 14:21:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:55.129 14:21:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.129 14:21:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:55.129 14:21:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.129 14:21:00 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:55.129 14:21:00 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:55.129 14:21:00 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:55.129 14:21:00 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:55.129 14:21:00 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:55.129 14:21:00 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:55.129 14:21:00 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:55.129 14:21:00 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:55.129 14:21:00 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:55.129 14:21:00 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:55.129 14:21:00 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:55.129 14:21:00 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:55.129 14:21:00 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:55.129 14:21:00 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:55.129 14:21:00 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:55.129 14:21:00 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:55.129 14:21:00 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:55.129 14:21:00 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:55.129 14:21:00 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:55.129 14:21:00 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:55.129 Cannot find device "nvmf_tgt_br" 00:14:55.129 14:21:00 -- nvmf/common.sh@154 -- # true 00:14:55.129 14:21:00 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:55.129 Cannot find device "nvmf_tgt_br2" 00:14:55.129 14:21:00 -- nvmf/common.sh@155 -- # true 00:14:55.129 14:21:00 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:55.129 14:21:00 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:55.129 Cannot find device "nvmf_tgt_br" 00:14:55.129 14:21:00 -- nvmf/common.sh@157 -- # true 00:14:55.129 14:21:00 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:55.129 Cannot find device "nvmf_tgt_br2" 00:14:55.129 14:21:00 -- nvmf/common.sh@158 -- # true 00:14:55.129 14:21:00 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:55.129 14:21:00 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:55.129 14:21:00 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:55.129 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:55.129 14:21:00 -- nvmf/common.sh@161 -- # true 00:14:55.129 14:21:00 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:55.129 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:55.129 14:21:00 -- nvmf/common.sh@162 -- # true 00:14:55.129 14:21:00 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:55.129 14:21:00 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:55.129 14:21:00 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:55.129 14:21:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:55.129 14:21:00 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:55.129 14:21:00 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:55.388 14:21:00 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:55.388 14:21:00 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:55.388 14:21:00 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:55.388 14:21:00 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:55.388 14:21:00 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:55.388 14:21:00 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:55.388 14:21:00 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:55.388 14:21:00 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:55.388 14:21:00 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
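The interface plumbing traced here reduces to a veth pair per target interface, with the target ends moved into a dedicated namespace and addressed on 10.0.0.0/24. A condensed sketch using the names and addresses from this log (second target interface and cleanup omitted; run as root):
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up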
00:14:55.388 14:21:00 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:55.388 14:21:00 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:55.388 14:21:00 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:55.388 14:21:00 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:55.388 14:21:00 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:55.388 14:21:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:55.388 14:21:00 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:55.388 14:21:00 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:55.388 14:21:00 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:55.388 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:55.388 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:14:55.388 00:14:55.388 --- 10.0.0.2 ping statistics --- 00:14:55.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:55.388 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:14:55.388 14:21:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:55.389 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:55.389 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:14:55.389 00:14:55.389 --- 10.0.0.3 ping statistics --- 00:14:55.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:55.389 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:14:55.389 14:21:00 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:55.389 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:55.389 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:55.389 00:14:55.389 --- 10.0.0.1 ping statistics --- 00:14:55.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:55.389 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:55.389 14:21:00 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:55.389 14:21:00 -- nvmf/common.sh@421 -- # return 0 00:14:55.389 14:21:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:55.389 14:21:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:55.389 14:21:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:55.389 14:21:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:55.389 14:21:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:55.389 14:21:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:55.389 14:21:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:55.389 14:21:00 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:14:55.389 14:21:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:55.389 14:21:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:55.389 14:21:00 -- common/autotest_common.sh@10 -- # set +x 00:14:55.389 14:21:00 -- nvmf/common.sh@469 -- # nvmfpid=83724 00:14:55.389 14:21:00 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:55.389 14:21:00 -- nvmf/common.sh@470 -- # waitforlisten 83724 00:14:55.389 14:21:00 -- common/autotest_common.sh@829 -- # '[' -z 83724 ']' 00:14:55.389 14:21:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.389 14:21:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:55.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
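Once the pings above confirm connectivity across the bridge, the target application is launched inside the namespace and its TCP transport is created over RPC; a condensed sketch of those two steps as they appear in this log (the harness backgrounds the target and waits for /var/tmp/spdk.sock before issuing RPCs):
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  # ...wait for the RPC socket to appear, then:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192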
00:14:55.389 14:21:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.389 14:21:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:55.389 14:21:00 -- common/autotest_common.sh@10 -- # set +x 00:14:55.389 [2024-12-05 14:21:00.984387] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:55.389 [2024-12-05 14:21:00.984475] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:55.648 [2024-12-05 14:21:01.125188] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.648 [2024-12-05 14:21:01.198384] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:55.648 [2024-12-05 14:21:01.198536] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:55.648 [2024-12-05 14:21:01.198562] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:55.648 [2024-12-05 14:21:01.198571] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:55.648 [2024-12-05 14:21:01.198604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.216 14:21:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:56.216 14:21:01 -- common/autotest_common.sh@862 -- # return 0 00:14:56.216 14:21:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:56.216 14:21:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:56.216 14:21:01 -- common/autotest_common.sh@10 -- # set +x 00:14:56.475 14:21:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:56.475 14:21:01 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:56.733 [2024-12-05 14:21:02.175174] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:56.733 14:21:02 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:14:56.733 14:21:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:56.733 14:21:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:56.733 14:21:02 -- common/autotest_common.sh@10 -- # set +x 00:14:56.733 ************************************ 00:14:56.733 START TEST lvs_grow_clean 00:14:56.733 ************************************ 00:14:56.733 14:21:02 -- common/autotest_common.sh@1114 -- # lvs_grow 00:14:56.733 14:21:02 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:56.733 14:21:02 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:56.733 14:21:02 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:56.733 14:21:02 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:56.733 14:21:02 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:56.733 14:21:02 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:56.733 14:21:02 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:56.733 14:21:02 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:56.733 14:21:02 -- target/nvmf_lvs_grow.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:56.991 14:21:02 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:56.991 14:21:02 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:57.249 14:21:02 -- target/nvmf_lvs_grow.sh@28 -- # lvs=e2c82f49-d488-4ee3-a012-cc0e68ac9540 00:14:57.249 14:21:02 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:57.249 14:21:02 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2c82f49-d488-4ee3-a012-cc0e68ac9540 00:14:57.506 14:21:03 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:57.506 14:21:03 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:57.506 14:21:03 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e2c82f49-d488-4ee3-a012-cc0e68ac9540 lvol 150 00:14:57.765 14:21:03 -- target/nvmf_lvs_grow.sh@33 -- # lvol=5553ebc6-39d2-4d7f-b3b7-90064e98844f 00:14:57.765 14:21:03 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:57.765 14:21:03 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:58.023 [2024-12-05 14:21:03.491590] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:58.023 [2024-12-05 14:21:03.491666] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:58.023 true 00:14:58.023 14:21:03 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2c82f49-d488-4ee3-a012-cc0e68ac9540 00:14:58.023 14:21:03 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:58.281 14:21:03 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:58.281 14:21:03 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:58.540 14:21:03 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5553ebc6-39d2-4d7f-b3b7-90064e98844f 00:14:58.540 14:21:04 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:58.798 [2024-12-05 14:21:04.360135] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:58.798 14:21:04 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:59.057 14:21:04 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=83885 00:14:59.058 14:21:04 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:59.058 14:21:04 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:59.058 14:21:04 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 83885 /var/tmp/bdevperf.sock 00:14:59.058 14:21:04 -- common/autotest_common.sh@829 -- # '[' -z 83885 ']' 00:14:59.058 
14:21:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:59.058 14:21:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:59.058 14:21:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:59.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:59.058 14:21:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:59.058 14:21:04 -- common/autotest_common.sh@10 -- # set +x 00:14:59.058 [2024-12-05 14:21:04.628991] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:59.058 [2024-12-05 14:21:04.629083] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83885 ] 00:14:59.316 [2024-12-05 14:21:04.768346] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.316 [2024-12-05 14:21:04.823001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:00.268 14:21:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:00.268 14:21:05 -- common/autotest_common.sh@862 -- # return 0 00:15:00.268 14:21:05 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:00.268 Nvme0n1 00:15:00.268 14:21:05 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:00.526 [ 00:15:00.526 { 00:15:00.526 "aliases": [ 00:15:00.526 "5553ebc6-39d2-4d7f-b3b7-90064e98844f" 00:15:00.526 ], 00:15:00.526 "assigned_rate_limits": { 00:15:00.526 "r_mbytes_per_sec": 0, 00:15:00.526 "rw_ios_per_sec": 0, 00:15:00.526 "rw_mbytes_per_sec": 0, 00:15:00.526 "w_mbytes_per_sec": 0 00:15:00.526 }, 00:15:00.526 "block_size": 4096, 00:15:00.526 "claimed": false, 00:15:00.526 "driver_specific": { 00:15:00.526 "mp_policy": "active_passive", 00:15:00.526 "nvme": [ 00:15:00.526 { 00:15:00.526 "ctrlr_data": { 00:15:00.526 "ana_reporting": false, 00:15:00.526 "cntlid": 1, 00:15:00.526 "firmware_revision": "24.01.1", 00:15:00.526 "model_number": "SPDK bdev Controller", 00:15:00.526 "multi_ctrlr": true, 00:15:00.526 "oacs": { 00:15:00.526 "firmware": 0, 00:15:00.526 "format": 0, 00:15:00.526 "ns_manage": 0, 00:15:00.526 "security": 0 00:15:00.526 }, 00:15:00.526 "serial_number": "SPDK0", 00:15:00.526 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:00.526 "vendor_id": "0x8086" 00:15:00.526 }, 00:15:00.526 "ns_data": { 00:15:00.526 "can_share": true, 00:15:00.526 "id": 1 00:15:00.526 }, 00:15:00.526 "trid": { 00:15:00.526 "adrfam": "IPv4", 00:15:00.526 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:00.526 "traddr": "10.0.0.2", 00:15:00.526 "trsvcid": "4420", 00:15:00.526 "trtype": "TCP" 00:15:00.526 }, 00:15:00.526 "vs": { 00:15:00.526 "nvme_version": "1.3" 00:15:00.526 } 00:15:00.526 } 00:15:00.526 ] 00:15:00.526 }, 00:15:00.526 "name": "Nvme0n1", 00:15:00.526 "num_blocks": 38912, 00:15:00.526 "product_name": "NVMe disk", 00:15:00.526 "supported_io_types": { 00:15:00.526 "abort": true, 00:15:00.526 "compare": true, 00:15:00.526 "compare_and_write": true, 00:15:00.526 "flush": true, 00:15:00.526 "nvme_admin": true, 00:15:00.526 "nvme_io": true, 00:15:00.526 "read": true, 
00:15:00.526 "reset": true, 00:15:00.526 "unmap": true, 00:15:00.526 "write": true, 00:15:00.526 "write_zeroes": true 00:15:00.526 }, 00:15:00.526 "uuid": "5553ebc6-39d2-4d7f-b3b7-90064e98844f", 00:15:00.526 "zoned": false 00:15:00.526 } 00:15:00.526 ] 00:15:00.526 14:21:06 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:00.526 14:21:06 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=83933 00:15:00.526 14:21:06 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:00.784 Running I/O for 10 seconds... 00:15:01.719 Latency(us) 00:15:01.719 [2024-12-05T14:21:07.367Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:01.719 [2024-12-05T14:21:07.367Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:01.719 Nvme0n1 : 1.00 9581.00 37.43 0.00 0.00 0.00 0.00 0.00 00:15:01.719 [2024-12-05T14:21:07.367Z] =================================================================================================================== 00:15:01.719 [2024-12-05T14:21:07.367Z] Total : 9581.00 37.43 0.00 0.00 0.00 0.00 0.00 00:15:01.719 00:15:02.653 14:21:08 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e2c82f49-d488-4ee3-a012-cc0e68ac9540 00:15:02.653 [2024-12-05T14:21:08.301Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:02.653 Nvme0n1 : 2.00 9655.50 37.72 0.00 0.00 0.00 0.00 0.00 00:15:02.653 [2024-12-05T14:21:08.301Z] =================================================================================================================== 00:15:02.653 [2024-12-05T14:21:08.301Z] Total : 9655.50 37.72 0.00 0.00 0.00 0.00 0.00 00:15:02.653 00:15:02.911 true 00:15:02.911 14:21:08 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2c82f49-d488-4ee3-a012-cc0e68ac9540 00:15:02.911 14:21:08 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:03.169 14:21:08 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:03.169 14:21:08 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:03.169 14:21:08 -- target/nvmf_lvs_grow.sh@65 -- # wait 83933 00:15:03.735 [2024-12-05T14:21:09.383Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:03.735 Nvme0n1 : 3.00 9559.00 37.34 0.00 0.00 0.00 0.00 0.00 00:15:03.735 [2024-12-05T14:21:09.383Z] =================================================================================================================== 00:15:03.735 [2024-12-05T14:21:09.383Z] Total : 9559.00 37.34 0.00 0.00 0.00 0.00 0.00 00:15:03.735 00:15:04.668 [2024-12-05T14:21:10.316Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:04.668 Nvme0n1 : 4.00 9460.50 36.96 0.00 0.00 0.00 0.00 0.00 00:15:04.668 [2024-12-05T14:21:10.316Z] =================================================================================================================== 00:15:04.668 [2024-12-05T14:21:10.316Z] Total : 9460.50 36.96 0.00 0.00 0.00 0.00 0.00 00:15:04.668 00:15:05.603 [2024-12-05T14:21:11.251Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:05.603 Nvme0n1 : 5.00 9438.00 36.87 0.00 0.00 0.00 0.00 0.00 00:15:05.603 [2024-12-05T14:21:11.251Z] =================================================================================================================== 00:15:05.603 [2024-12-05T14:21:11.251Z] Total : 9438.00 
36.87 0.00 0.00 0.00 0.00 0.00 00:15:05.603 00:15:06.979 [2024-12-05T14:21:12.627Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:06.979 Nvme0n1 : 6.00 9409.33 36.76 0.00 0.00 0.00 0.00 0.00 00:15:06.979 [2024-12-05T14:21:12.627Z] =================================================================================================================== 00:15:06.979 [2024-12-05T14:21:12.627Z] Total : 9409.33 36.76 0.00 0.00 0.00 0.00 0.00 00:15:06.979 00:15:07.958 [2024-12-05T14:21:13.606Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:07.959 Nvme0n1 : 7.00 9234.57 36.07 0.00 0.00 0.00 0.00 0.00 00:15:07.959 [2024-12-05T14:21:13.607Z] =================================================================================================================== 00:15:07.959 [2024-12-05T14:21:13.607Z] Total : 9234.57 36.07 0.00 0.00 0.00 0.00 0.00 00:15:07.959 00:15:08.905 [2024-12-05T14:21:14.553Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:08.905 Nvme0n1 : 8.00 9195.62 35.92 0.00 0.00 0.00 0.00 0.00 00:15:08.905 [2024-12-05T14:21:14.553Z] =================================================================================================================== 00:15:08.905 [2024-12-05T14:21:14.553Z] Total : 9195.62 35.92 0.00 0.00 0.00 0.00 0.00 00:15:08.905 00:15:09.840 [2024-12-05T14:21:15.488Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:09.840 Nvme0n1 : 9.00 9203.67 35.95 0.00 0.00 0.00 0.00 0.00 00:15:09.840 [2024-12-05T14:21:15.488Z] =================================================================================================================== 00:15:09.840 [2024-12-05T14:21:15.488Z] Total : 9203.67 35.95 0.00 0.00 0.00 0.00 0.00 00:15:09.840 00:15:10.775 [2024-12-05T14:21:16.423Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:10.775 Nvme0n1 : 10.00 9213.20 35.99 0.00 0.00 0.00 0.00 0.00 00:15:10.775 [2024-12-05T14:21:16.423Z] =================================================================================================================== 00:15:10.775 [2024-12-05T14:21:16.423Z] Total : 9213.20 35.99 0.00 0.00 0.00 0.00 0.00 00:15:10.775 00:15:10.775 00:15:10.775 Latency(us) 00:15:10.775 [2024-12-05T14:21:16.423Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.775 [2024-12-05T14:21:16.423Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:10.775 Nvme0n1 : 10.01 9217.55 36.01 0.00 0.00 13880.61 6523.81 140127.88 00:15:10.775 [2024-12-05T14:21:16.423Z] =================================================================================================================== 00:15:10.775 [2024-12-05T14:21:16.423Z] Total : 9217.55 36.01 0.00 0.00 13880.61 6523.81 140127.88 00:15:10.775 0 00:15:10.775 14:21:16 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 83885 00:15:10.775 14:21:16 -- common/autotest_common.sh@936 -- # '[' -z 83885 ']' 00:15:10.775 14:21:16 -- common/autotest_common.sh@940 -- # kill -0 83885 00:15:10.775 14:21:16 -- common/autotest_common.sh@941 -- # uname 00:15:10.775 14:21:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:10.775 14:21:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83885 00:15:10.775 14:21:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:10.775 killing process with pid 83885 00:15:10.775 Received shutdown signal, test time was about 10.000000 seconds 00:15:10.775 
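The clean pass above follows a simple pattern: build a logical volume store on a 200M AIO file, export a 150M lvol over NVMe/TCP, then enlarge the file, rescan the AIO bdev, and grow the lvstore while bdevperf keeps I/O running, checking that total_data_clusters moves from 49 to 99. A condensed sketch of the grow step, with the paths and UUID taken from this log:
  truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e2c82f49-d488-4ee3-a012-cc0e68ac9540
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2c82f49-d488-4ee3-a012-cc0e68ac9540 | jq -r '.[0].total_data_clusters'   # 49 before the grow, 99 after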
00:15:10.775 Latency(us) 00:15:10.775 [2024-12-05T14:21:16.423Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.775 [2024-12-05T14:21:16.423Z] =================================================================================================================== 00:15:10.775 [2024-12-05T14:21:16.423Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:10.775 14:21:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:10.775 14:21:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83885' 00:15:10.775 14:21:16 -- common/autotest_common.sh@955 -- # kill 83885 00:15:10.775 14:21:16 -- common/autotest_common.sh@960 -- # wait 83885 00:15:11.033 14:21:16 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:11.290 14:21:16 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2c82f49-d488-4ee3-a012-cc0e68ac9540 00:15:11.290 14:21:16 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:15:11.548 14:21:16 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:15:11.548 14:21:16 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:15:11.548 14:21:16 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:11.548 [2024-12-05 14:21:17.185485] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:11.807 14:21:17 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2c82f49-d488-4ee3-a012-cc0e68ac9540 00:15:11.807 14:21:17 -- common/autotest_common.sh@650 -- # local es=0 00:15:11.807 14:21:17 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2c82f49-d488-4ee3-a012-cc0e68ac9540 00:15:11.807 14:21:17 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:11.807 14:21:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:11.807 14:21:17 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:11.807 14:21:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:11.807 14:21:17 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:11.807 14:21:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:11.807 14:21:17 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:11.807 14:21:17 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:11.807 14:21:17 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2c82f49-d488-4ee3-a012-cc0e68ac9540 00:15:11.807 2024/12/05 14:21:17 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:e2c82f49-d488-4ee3-a012-cc0e68ac9540], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:15:11.807 request: 00:15:11.807 { 00:15:11.807 "method": "bdev_lvol_get_lvstores", 00:15:11.807 "params": { 00:15:11.807 "uuid": "e2c82f49-d488-4ee3-a012-cc0e68ac9540" 00:15:11.807 } 00:15:11.807 } 00:15:11.807 Got JSON-RPC error response 00:15:11.807 GoRPCClient: error on JSON-RPC call 00:15:11.807 14:21:17 -- common/autotest_common.sh@653 -- # es=1 00:15:11.807 14:21:17 -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:11.807 14:21:17 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:11.807 14:21:17 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:11.808 14:21:17 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:12.066 aio_bdev 00:15:12.066 14:21:17 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 5553ebc6-39d2-4d7f-b3b7-90064e98844f 00:15:12.066 14:21:17 -- common/autotest_common.sh@897 -- # local bdev_name=5553ebc6-39d2-4d7f-b3b7-90064e98844f 00:15:12.066 14:21:17 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:12.066 14:21:17 -- common/autotest_common.sh@899 -- # local i 00:15:12.066 14:21:17 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:12.066 14:21:17 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:12.066 14:21:17 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:12.325 14:21:17 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5553ebc6-39d2-4d7f-b3b7-90064e98844f -t 2000 00:15:12.584 [ 00:15:12.584 { 00:15:12.584 "aliases": [ 00:15:12.584 "lvs/lvol" 00:15:12.584 ], 00:15:12.584 "assigned_rate_limits": { 00:15:12.584 "r_mbytes_per_sec": 0, 00:15:12.584 "rw_ios_per_sec": 0, 00:15:12.584 "rw_mbytes_per_sec": 0, 00:15:12.584 "w_mbytes_per_sec": 0 00:15:12.584 }, 00:15:12.584 "block_size": 4096, 00:15:12.584 "claimed": false, 00:15:12.584 "driver_specific": { 00:15:12.584 "lvol": { 00:15:12.584 "base_bdev": "aio_bdev", 00:15:12.584 "clone": false, 00:15:12.584 "esnap_clone": false, 00:15:12.584 "lvol_store_uuid": "e2c82f49-d488-4ee3-a012-cc0e68ac9540", 00:15:12.584 "snapshot": false, 00:15:12.584 "thin_provision": false 00:15:12.584 } 00:15:12.584 }, 00:15:12.584 "name": "5553ebc6-39d2-4d7f-b3b7-90064e98844f", 00:15:12.584 "num_blocks": 38912, 00:15:12.584 "product_name": "Logical Volume", 00:15:12.584 "supported_io_types": { 00:15:12.584 "abort": false, 00:15:12.584 "compare": false, 00:15:12.584 "compare_and_write": false, 00:15:12.584 "flush": false, 00:15:12.584 "nvme_admin": false, 00:15:12.584 "nvme_io": false, 00:15:12.584 "read": true, 00:15:12.584 "reset": true, 00:15:12.584 "unmap": true, 00:15:12.584 "write": true, 00:15:12.584 "write_zeroes": true 00:15:12.584 }, 00:15:12.584 "uuid": "5553ebc6-39d2-4d7f-b3b7-90064e98844f", 00:15:12.584 "zoned": false 00:15:12.584 } 00:15:12.584 ] 00:15:12.584 14:21:18 -- common/autotest_common.sh@905 -- # return 0 00:15:12.584 14:21:18 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2c82f49-d488-4ee3-a012-cc0e68ac9540 00:15:12.584 14:21:18 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:15:12.584 14:21:18 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:15:12.584 14:21:18 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:15:12.584 14:21:18 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2c82f49-d488-4ee3-a012-cc0e68ac9540 00:15:12.844 14:21:18 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:15:12.844 14:21:18 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 5553ebc6-39d2-4d7f-b3b7-90064e98844f 00:15:13.104 14:21:18 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u e2c82f49-d488-4ee3-a012-cc0e68ac9540 00:15:13.363 14:21:18 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:13.622 14:21:19 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:13.880 00:15:13.880 real 0m17.271s 00:15:13.880 user 0m16.641s 00:15:13.880 sys 0m2.083s 00:15:13.880 14:21:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:13.880 14:21:19 -- common/autotest_common.sh@10 -- # set +x 00:15:13.880 ************************************ 00:15:13.880 END TEST lvs_grow_clean 00:15:13.880 ************************************ 00:15:13.880 14:21:19 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:15:13.880 14:21:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:13.880 14:21:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:13.880 14:21:19 -- common/autotest_common.sh@10 -- # set +x 00:15:14.137 ************************************ 00:15:14.138 START TEST lvs_grow_dirty 00:15:14.138 ************************************ 00:15:14.138 14:21:19 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:15:14.138 14:21:19 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:14.138 14:21:19 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:14.138 14:21:19 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:14.138 14:21:19 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:14.138 14:21:19 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:14.138 14:21:19 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:14.138 14:21:19 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:14.138 14:21:19 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:14.138 14:21:19 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:14.396 14:21:19 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:14.396 14:21:19 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:14.655 14:21:20 -- target/nvmf_lvs_grow.sh@28 -- # lvs=90e26b76-2936-4bfb-8bc0-3f9f880c9cab 00:15:14.655 14:21:20 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90e26b76-2936-4bfb-8bc0-3f9f880c9cab 00:15:14.655 14:21:20 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:14.914 14:21:20 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:14.914 14:21:20 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:14.914 14:21:20 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 90e26b76-2936-4bfb-8bc0-3f9f880c9cab lvol 150 00:15:15.173 14:21:20 -- target/nvmf_lvs_grow.sh@33 -- # lvol=5827ad12-ad44-4035-9e19-6c5c2a061570 00:15:15.173 14:21:20 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:15.173 14:21:20 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:15.173 [2024-12-05 14:21:20.774396] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO 
device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:15.173 [2024-12-05 14:21:20.774462] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:15.173 true 00:15:15.173 14:21:20 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90e26b76-2936-4bfb-8bc0-3f9f880c9cab 00:15:15.173 14:21:20 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:15.740 14:21:21 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:15.740 14:21:21 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:15.740 14:21:21 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5827ad12-ad44-4035-9e19-6c5c2a061570 00:15:15.999 14:21:21 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:16.258 14:21:21 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:16.517 14:21:22 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:16.517 14:21:22 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=84313 00:15:16.517 14:21:22 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:16.517 14:21:22 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 84313 /var/tmp/bdevperf.sock 00:15:16.517 14:21:22 -- common/autotest_common.sh@829 -- # '[' -z 84313 ']' 00:15:16.517 14:21:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:16.517 14:21:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:16.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:16.517 14:21:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:16.517 14:21:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:16.517 14:21:22 -- common/autotest_common.sh@10 -- # set +x 00:15:16.517 [2024-12-05 14:21:22.154519] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
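The dirty pass exports its lvol the same way the clean pass did: a subsystem is created, the logical volume is attached as a namespace, and a TCP listener is added on the target address. A condensed sketch with the NQN, lvol UUID, and address taken from this log:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5827ad12-ad44-4035-9e19-6c5c2a061570
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420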
00:15:16.517 [2024-12-05 14:21:22.154614] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84313 ] 00:15:16.776 [2024-12-05 14:21:22.289310] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.776 [2024-12-05 14:21:22.359193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:17.712 14:21:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:17.712 14:21:23 -- common/autotest_common.sh@862 -- # return 0 00:15:17.712 14:21:23 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:17.712 Nvme0n1 00:15:17.712 14:21:23 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:17.972 [ 00:15:17.972 { 00:15:17.972 "aliases": [ 00:15:17.972 "5827ad12-ad44-4035-9e19-6c5c2a061570" 00:15:17.972 ], 00:15:17.972 "assigned_rate_limits": { 00:15:17.972 "r_mbytes_per_sec": 0, 00:15:17.972 "rw_ios_per_sec": 0, 00:15:17.972 "rw_mbytes_per_sec": 0, 00:15:17.972 "w_mbytes_per_sec": 0 00:15:17.972 }, 00:15:17.972 "block_size": 4096, 00:15:17.972 "claimed": false, 00:15:17.972 "driver_specific": { 00:15:17.972 "mp_policy": "active_passive", 00:15:17.972 "nvme": [ 00:15:17.972 { 00:15:17.972 "ctrlr_data": { 00:15:17.972 "ana_reporting": false, 00:15:17.972 "cntlid": 1, 00:15:17.972 "firmware_revision": "24.01.1", 00:15:17.972 "model_number": "SPDK bdev Controller", 00:15:17.972 "multi_ctrlr": true, 00:15:17.972 "oacs": { 00:15:17.972 "firmware": 0, 00:15:17.972 "format": 0, 00:15:17.972 "ns_manage": 0, 00:15:17.972 "security": 0 00:15:17.972 }, 00:15:17.972 "serial_number": "SPDK0", 00:15:17.972 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:17.972 "vendor_id": "0x8086" 00:15:17.972 }, 00:15:17.972 "ns_data": { 00:15:17.972 "can_share": true, 00:15:17.972 "id": 1 00:15:17.972 }, 00:15:17.972 "trid": { 00:15:17.972 "adrfam": "IPv4", 00:15:17.972 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:17.972 "traddr": "10.0.0.2", 00:15:17.972 "trsvcid": "4420", 00:15:17.972 "trtype": "TCP" 00:15:17.972 }, 00:15:17.972 "vs": { 00:15:17.972 "nvme_version": "1.3" 00:15:17.972 } 00:15:17.972 } 00:15:17.972 ] 00:15:17.972 }, 00:15:17.972 "name": "Nvme0n1", 00:15:17.972 "num_blocks": 38912, 00:15:17.972 "product_name": "NVMe disk", 00:15:17.972 "supported_io_types": { 00:15:17.972 "abort": true, 00:15:17.972 "compare": true, 00:15:17.972 "compare_and_write": true, 00:15:17.972 "flush": true, 00:15:17.972 "nvme_admin": true, 00:15:17.972 "nvme_io": true, 00:15:17.972 "read": true, 00:15:17.972 "reset": true, 00:15:17.972 "unmap": true, 00:15:17.972 "write": true, 00:15:17.972 "write_zeroes": true 00:15:17.972 }, 00:15:17.972 "uuid": "5827ad12-ad44-4035-9e19-6c5c2a061570", 00:15:17.972 "zoned": false 00:15:17.972 } 00:15:17.972 ] 00:15:17.972 14:21:23 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=84361 00:15:17.972 14:21:23 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:17.972 14:21:23 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:18.231 Running I/O for 10 seconds... 
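I/O is driven by a separate bdevperf process with its own RPC socket: the fabric controller is attached through that socket before perform_tests starts the 10-second randwrite run reported below. A condensed sketch using the socket path, queue depth, and NQN from this log:
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests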
00:15:19.168 Latency(us) 00:15:19.168 [2024-12-05T14:21:24.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.168 [2024-12-05T14:21:24.816Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:19.168 Nvme0n1 : 1.00 9790.00 38.24 0.00 0.00 0.00 0.00 0.00 00:15:19.168 [2024-12-05T14:21:24.816Z] =================================================================================================================== 00:15:19.168 [2024-12-05T14:21:24.816Z] Total : 9790.00 38.24 0.00 0.00 0.00 0.00 0.00 00:15:19.168 00:15:20.101 14:21:25 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 90e26b76-2936-4bfb-8bc0-3f9f880c9cab 00:15:20.101 [2024-12-05T14:21:25.749Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:20.101 Nvme0n1 : 2.00 9684.00 37.83 0.00 0.00 0.00 0.00 0.00 00:15:20.101 [2024-12-05T14:21:25.749Z] =================================================================================================================== 00:15:20.101 [2024-12-05T14:21:25.749Z] Total : 9684.00 37.83 0.00 0.00 0.00 0.00 0.00 00:15:20.101 00:15:20.358 true 00:15:20.358 14:21:25 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:20.358 14:21:25 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90e26b76-2936-4bfb-8bc0-3f9f880c9cab 00:15:20.617 14:21:26 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:20.617 14:21:26 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:20.617 14:21:26 -- target/nvmf_lvs_grow.sh@65 -- # wait 84361 00:15:21.182 [2024-12-05T14:21:26.830Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:21.182 Nvme0n1 : 3.00 9663.33 37.75 0.00 0.00 0.00 0.00 0.00 00:15:21.182 [2024-12-05T14:21:26.830Z] =================================================================================================================== 00:15:21.182 [2024-12-05T14:21:26.830Z] Total : 9663.33 37.75 0.00 0.00 0.00 0.00 0.00 00:15:21.182 00:15:22.118 [2024-12-05T14:21:27.766Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:22.118 Nvme0n1 : 4.00 9348.50 36.52 0.00 0.00 0.00 0.00 0.00 00:15:22.118 [2024-12-05T14:21:27.766Z] =================================================================================================================== 00:15:22.118 [2024-12-05T14:21:27.766Z] Total : 9348.50 36.52 0.00 0.00 0.00 0.00 0.00 00:15:22.118 00:15:23.054 [2024-12-05T14:21:28.702Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:23.054 Nvme0n1 : 5.00 9433.00 36.85 0.00 0.00 0.00 0.00 0.00 00:15:23.054 [2024-12-05T14:21:28.702Z] =================================================================================================================== 00:15:23.054 [2024-12-05T14:21:28.702Z] Total : 9433.00 36.85 0.00 0.00 0.00 0.00 0.00 00:15:23.054 00:15:24.431 [2024-12-05T14:21:30.079Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:24.431 Nvme0n1 : 6.00 9471.67 37.00 0.00 0.00 0.00 0.00 0.00 00:15:24.431 [2024-12-05T14:21:30.079Z] =================================================================================================================== 00:15:24.431 [2024-12-05T14:21:30.079Z] Total : 9471.67 37.00 0.00 0.00 0.00 0.00 0.00 00:15:24.431 00:15:25.367 [2024-12-05T14:21:31.015Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:15:25.367 Nvme0n1 : 7.00 9353.29 36.54 0.00 0.00 0.00 0.00 0.00 00:15:25.367 [2024-12-05T14:21:31.015Z] =================================================================================================================== 00:15:25.367 [2024-12-05T14:21:31.015Z] Total : 9353.29 36.54 0.00 0.00 0.00 0.00 0.00 00:15:25.367 00:15:26.304 [2024-12-05T14:21:31.952Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:26.304 Nvme0n1 : 8.00 9309.25 36.36 0.00 0.00 0.00 0.00 0.00 00:15:26.304 [2024-12-05T14:21:31.952Z] =================================================================================================================== 00:15:26.304 [2024-12-05T14:21:31.952Z] Total : 9309.25 36.36 0.00 0.00 0.00 0.00 0.00 00:15:26.304 00:15:27.240 [2024-12-05T14:21:32.888Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:27.240 Nvme0n1 : 9.00 9267.67 36.20 0.00 0.00 0.00 0.00 0.00 00:15:27.240 [2024-12-05T14:21:32.888Z] =================================================================================================================== 00:15:27.240 [2024-12-05T14:21:32.888Z] Total : 9267.67 36.20 0.00 0.00 0.00 0.00 0.00 00:15:27.240 00:15:28.176 [2024-12-05T14:21:33.824Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:28.176 Nvme0n1 : 10.00 9196.40 35.92 0.00 0.00 0.00 0.00 0.00 00:15:28.176 [2024-12-05T14:21:33.824Z] =================================================================================================================== 00:15:28.176 [2024-12-05T14:21:33.824Z] Total : 9196.40 35.92 0.00 0.00 0.00 0.00 0.00 00:15:28.176 00:15:28.176 00:15:28.176 Latency(us) 00:15:28.176 [2024-12-05T14:21:33.824Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:28.176 [2024-12-05T14:21:33.824Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:28.176 Nvme0n1 : 10.01 9198.05 35.93 0.00 0.00 13911.59 4379.00 151566.89 00:15:28.176 [2024-12-05T14:21:33.824Z] =================================================================================================================== 00:15:28.176 [2024-12-05T14:21:33.824Z] Total : 9198.05 35.93 0.00 0.00 13911.59 4379.00 151566.89 00:15:28.176 0 00:15:28.176 14:21:33 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 84313 00:15:28.176 14:21:33 -- common/autotest_common.sh@936 -- # '[' -z 84313 ']' 00:15:28.176 14:21:33 -- common/autotest_common.sh@940 -- # kill -0 84313 00:15:28.176 14:21:33 -- common/autotest_common.sh@941 -- # uname 00:15:28.176 14:21:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:28.176 14:21:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84313 00:15:28.176 killing process with pid 84313 00:15:28.176 Received shutdown signal, test time was about 10.000000 seconds 00:15:28.176 00:15:28.176 Latency(us) 00:15:28.176 [2024-12-05T14:21:33.824Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:28.176 [2024-12-05T14:21:33.824Z] =================================================================================================================== 00:15:28.176 [2024-12-05T14:21:33.824Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:28.176 14:21:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:28.176 14:21:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:28.177 14:21:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84313' 00:15:28.177 14:21:33 -- common/autotest_common.sh@955 
-- # kill 84313 00:15:28.177 14:21:33 -- common/autotest_common.sh@960 -- # wait 84313 00:15:28.436 14:21:33 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:28.695 14:21:34 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90e26b76-2936-4bfb-8bc0-3f9f880c9cab 00:15:28.695 14:21:34 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:15:28.953 14:21:34 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:15:28.953 14:21:34 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:15:28.953 14:21:34 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 83724 00:15:28.953 14:21:34 -- target/nvmf_lvs_grow.sh@74 -- # wait 83724 00:15:28.953 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 83724 Killed "${NVMF_APP[@]}" "$@" 00:15:28.953 14:21:34 -- target/nvmf_lvs_grow.sh@74 -- # true 00:15:28.953 14:21:34 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:15:28.953 14:21:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:28.953 14:21:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:28.953 14:21:34 -- common/autotest_common.sh@10 -- # set +x 00:15:28.953 14:21:34 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:28.953 14:21:34 -- nvmf/common.sh@469 -- # nvmfpid=84513 00:15:28.953 14:21:34 -- nvmf/common.sh@470 -- # waitforlisten 84513 00:15:28.953 14:21:34 -- common/autotest_common.sh@829 -- # '[' -z 84513 ']' 00:15:28.953 14:21:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.953 14:21:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:28.953 14:21:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.953 14:21:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:28.953 14:21:34 -- common/autotest_common.sh@10 -- # set +x 00:15:28.953 [2024-12-05 14:21:34.550744] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:28.953 [2024-12-05 14:21:34.550825] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.213 [2024-12-05 14:21:34.683600] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.213 [2024-12-05 14:21:34.760918] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:29.213 [2024-12-05 14:21:34.761063] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:29.213 [2024-12-05 14:21:34.761077] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:29.213 [2024-12-05 14:21:34.761085] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
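What makes this pass "dirty" is shown next: the original target was killed with SIGKILL immediately after the grow, so the lvstore metadata on the AIO file was never cleanly committed; re-creating the AIO bdev on the freshly started target should trigger blobstore recovery and still report the grown layout. A condensed sketch of that check, with the paths, UUID, and expected values taken from this log:
  kill -9 "$nvmfpid"   # 83724 in this run; leaves the lvstore dirty
  # ...start a new nvmf_tgt inside the namespace, then:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90e26b76-2936-4bfb-8bc0-3f9f880c9cab | jq -r '.[0].free_clusters'          # expected 61
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90e26b76-2936-4bfb-8bc0-3f9f880c9cab | jq -r '.[0].total_data_clusters'    # expected 99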
00:15:29.213 [2024-12-05 14:21:34.761117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.148 14:21:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:30.148 14:21:35 -- common/autotest_common.sh@862 -- # return 0 00:15:30.148 14:21:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:30.148 14:21:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:30.148 14:21:35 -- common/autotest_common.sh@10 -- # set +x 00:15:30.148 14:21:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:30.148 14:21:35 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:30.405 [2024-12-05 14:21:35.854167] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:30.405 [2024-12-05 14:21:35.854553] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:30.405 [2024-12-05 14:21:35.854770] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:30.405 14:21:35 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:15:30.405 14:21:35 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 5827ad12-ad44-4035-9e19-6c5c2a061570 00:15:30.405 14:21:35 -- common/autotest_common.sh@897 -- # local bdev_name=5827ad12-ad44-4035-9e19-6c5c2a061570 00:15:30.405 14:21:35 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:30.405 14:21:35 -- common/autotest_common.sh@899 -- # local i 00:15:30.405 14:21:35 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:30.405 14:21:35 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:30.405 14:21:35 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:30.662 14:21:36 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5827ad12-ad44-4035-9e19-6c5c2a061570 -t 2000 00:15:30.921 [ 00:15:30.921 { 00:15:30.921 "aliases": [ 00:15:30.921 "lvs/lvol" 00:15:30.921 ], 00:15:30.921 "assigned_rate_limits": { 00:15:30.921 "r_mbytes_per_sec": 0, 00:15:30.921 "rw_ios_per_sec": 0, 00:15:30.921 "rw_mbytes_per_sec": 0, 00:15:30.921 "w_mbytes_per_sec": 0 00:15:30.921 }, 00:15:30.921 "block_size": 4096, 00:15:30.921 "claimed": false, 00:15:30.921 "driver_specific": { 00:15:30.921 "lvol": { 00:15:30.921 "base_bdev": "aio_bdev", 00:15:30.921 "clone": false, 00:15:30.921 "esnap_clone": false, 00:15:30.921 "lvol_store_uuid": "90e26b76-2936-4bfb-8bc0-3f9f880c9cab", 00:15:30.921 "snapshot": false, 00:15:30.921 "thin_provision": false 00:15:30.921 } 00:15:30.921 }, 00:15:30.921 "name": "5827ad12-ad44-4035-9e19-6c5c2a061570", 00:15:30.921 "num_blocks": 38912, 00:15:30.921 "product_name": "Logical Volume", 00:15:30.921 "supported_io_types": { 00:15:30.921 "abort": false, 00:15:30.921 "compare": false, 00:15:30.921 "compare_and_write": false, 00:15:30.921 "flush": false, 00:15:30.921 "nvme_admin": false, 00:15:30.921 "nvme_io": false, 00:15:30.921 "read": true, 00:15:30.921 "reset": true, 00:15:30.921 "unmap": true, 00:15:30.921 "write": true, 00:15:30.921 "write_zeroes": true 00:15:30.921 }, 00:15:30.921 "uuid": "5827ad12-ad44-4035-9e19-6c5c2a061570", 00:15:30.921 "zoned": false 00:15:30.921 } 00:15:30.921 ] 00:15:30.921 14:21:36 -- common/autotest_common.sh@905 -- # return 0 00:15:30.921 14:21:36 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
90e26b76-2936-4bfb-8bc0-3f9f880c9cab 00:15:30.921 14:21:36 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:15:31.179 14:21:36 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:15:31.179 14:21:36 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90e26b76-2936-4bfb-8bc0-3f9f880c9cab 00:15:31.179 14:21:36 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:15:31.437 14:21:36 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:15:31.437 14:21:36 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:31.695 [2024-12-05 14:21:37.147551] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:31.695 14:21:37 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90e26b76-2936-4bfb-8bc0-3f9f880c9cab 00:15:31.695 14:21:37 -- common/autotest_common.sh@650 -- # local es=0 00:15:31.695 14:21:37 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90e26b76-2936-4bfb-8bc0-3f9f880c9cab 00:15:31.695 14:21:37 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:31.695 14:21:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:31.695 14:21:37 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:31.695 14:21:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:31.695 14:21:37 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:31.695 14:21:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:31.695 14:21:37 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:31.695 14:21:37 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:31.695 14:21:37 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90e26b76-2936-4bfb-8bc0-3f9f880c9cab 00:15:31.953 2024/12/05 14:21:37 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:90e26b76-2936-4bfb-8bc0-3f9f880c9cab], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:15:31.953 request: 00:15:31.953 { 00:15:31.953 "method": "bdev_lvol_get_lvstores", 00:15:31.953 "params": { 00:15:31.953 "uuid": "90e26b76-2936-4bfb-8bc0-3f9f880c9cab" 00:15:31.953 } 00:15:31.953 } 00:15:31.954 Got JSON-RPC error response 00:15:31.954 GoRPCClient: error on JSON-RPC call 00:15:31.954 14:21:37 -- common/autotest_common.sh@653 -- # es=1 00:15:31.954 14:21:37 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:31.954 14:21:37 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:31.954 14:21:37 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:31.954 14:21:37 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:32.212 aio_bdev 00:15:32.212 14:21:37 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 5827ad12-ad44-4035-9e19-6c5c2a061570 00:15:32.212 14:21:37 -- common/autotest_common.sh@897 -- # local bdev_name=5827ad12-ad44-4035-9e19-6c5c2a061570 00:15:32.212 14:21:37 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:32.212 
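A minimal sketch of the dirty-lvstore recovery sequence exercised above, using only the rpc.py calls, paths, and UUIDs already shown in this run (values are specific to this job):

  # drop the AIO base bdev while the lvstore still has unflushed metadata
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
  # the lvstore lookup is now expected to fail with Code=-19 (No such device)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90e26b76-2936-4bfb-8bc0-3f9f880c9cab || true
  # re-create the AIO bdev; the blobstore performs recovery and replays the lvol blobs
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  # wait for the lvol bdev to reappear, then re-check the cluster counts
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5827ad12-ad44-4035-9e19-6c5c2a061570 -t 2000
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90e26b76-2936-4bfb-8bc0-3f9f880c9cab | jq -r '.[0].free_clusters'
  # the test above expects free_clusters == 61 and total_data_clusters == 99 after recovery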
14:21:37 -- common/autotest_common.sh@899 -- # local i 00:15:32.212 14:21:37 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:32.212 14:21:37 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:32.212 14:21:37 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:32.471 14:21:37 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5827ad12-ad44-4035-9e19-6c5c2a061570 -t 2000 00:15:32.471 [ 00:15:32.471 { 00:15:32.471 "aliases": [ 00:15:32.471 "lvs/lvol" 00:15:32.471 ], 00:15:32.471 "assigned_rate_limits": { 00:15:32.471 "r_mbytes_per_sec": 0, 00:15:32.471 "rw_ios_per_sec": 0, 00:15:32.471 "rw_mbytes_per_sec": 0, 00:15:32.471 "w_mbytes_per_sec": 0 00:15:32.471 }, 00:15:32.471 "block_size": 4096, 00:15:32.471 "claimed": false, 00:15:32.471 "driver_specific": { 00:15:32.471 "lvol": { 00:15:32.471 "base_bdev": "aio_bdev", 00:15:32.471 "clone": false, 00:15:32.471 "esnap_clone": false, 00:15:32.471 "lvol_store_uuid": "90e26b76-2936-4bfb-8bc0-3f9f880c9cab", 00:15:32.471 "snapshot": false, 00:15:32.471 "thin_provision": false 00:15:32.472 } 00:15:32.472 }, 00:15:32.472 "name": "5827ad12-ad44-4035-9e19-6c5c2a061570", 00:15:32.472 "num_blocks": 38912, 00:15:32.472 "product_name": "Logical Volume", 00:15:32.472 "supported_io_types": { 00:15:32.472 "abort": false, 00:15:32.472 "compare": false, 00:15:32.472 "compare_and_write": false, 00:15:32.472 "flush": false, 00:15:32.472 "nvme_admin": false, 00:15:32.472 "nvme_io": false, 00:15:32.472 "read": true, 00:15:32.472 "reset": true, 00:15:32.472 "unmap": true, 00:15:32.472 "write": true, 00:15:32.472 "write_zeroes": true 00:15:32.472 }, 00:15:32.472 "uuid": "5827ad12-ad44-4035-9e19-6c5c2a061570", 00:15:32.472 "zoned": false 00:15:32.472 } 00:15:32.472 ] 00:15:32.472 14:21:38 -- common/autotest_common.sh@905 -- # return 0 00:15:32.730 14:21:38 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90e26b76-2936-4bfb-8bc0-3f9f880c9cab 00:15:32.730 14:21:38 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:15:32.730 14:21:38 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:15:32.730 14:21:38 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:15:32.730 14:21:38 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90e26b76-2936-4bfb-8bc0-3f9f880c9cab 00:15:32.988 14:21:38 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:15:32.988 14:21:38 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 5827ad12-ad44-4035-9e19-6c5c2a061570 00:15:33.245 14:21:38 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 90e26b76-2936-4bfb-8bc0-3f9f880c9cab 00:15:33.503 14:21:39 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:33.760 14:21:39 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:34.017 00:15:34.017 real 0m20.049s 00:15:34.017 user 0m40.076s 00:15:34.017 sys 0m8.184s 00:15:34.017 14:21:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:34.017 14:21:39 -- common/autotest_common.sh@10 -- # set +x 00:15:34.017 ************************************ 00:15:34.017 END TEST lvs_grow_dirty 00:15:34.017 ************************************ 00:15:34.017 14:21:39 -- 
target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:15:34.017 14:21:39 -- common/autotest_common.sh@806 -- # type=--id 00:15:34.017 14:21:39 -- common/autotest_common.sh@807 -- # id=0 00:15:34.017 14:21:39 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:34.017 14:21:39 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:34.017 14:21:39 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:34.017 14:21:39 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:34.017 14:21:39 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:34.017 14:21:39 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:34.017 nvmf_trace.0 00:15:34.017 14:21:39 -- common/autotest_common.sh@821 -- # return 0 00:15:34.017 14:21:39 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:34.018 14:21:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:34.018 14:21:39 -- nvmf/common.sh@116 -- # sync 00:15:34.584 14:21:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:34.584 14:21:40 -- nvmf/common.sh@119 -- # set +e 00:15:34.584 14:21:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:34.584 14:21:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:34.584 rmmod nvme_tcp 00:15:34.584 rmmod nvme_fabrics 00:15:34.584 rmmod nvme_keyring 00:15:34.584 14:21:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:34.584 14:21:40 -- nvmf/common.sh@123 -- # set -e 00:15:34.584 14:21:40 -- nvmf/common.sh@124 -- # return 0 00:15:34.584 14:21:40 -- nvmf/common.sh@477 -- # '[' -n 84513 ']' 00:15:34.584 14:21:40 -- nvmf/common.sh@478 -- # killprocess 84513 00:15:34.584 14:21:40 -- common/autotest_common.sh@936 -- # '[' -z 84513 ']' 00:15:34.584 14:21:40 -- common/autotest_common.sh@940 -- # kill -0 84513 00:15:34.584 14:21:40 -- common/autotest_common.sh@941 -- # uname 00:15:34.584 14:21:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:34.584 14:21:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84513 00:15:34.584 14:21:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:34.584 14:21:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:34.584 killing process with pid 84513 00:15:34.584 14:21:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84513' 00:15:34.584 14:21:40 -- common/autotest_common.sh@955 -- # kill 84513 00:15:34.584 14:21:40 -- common/autotest_common.sh@960 -- # wait 84513 00:15:34.842 14:21:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:34.842 14:21:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:34.842 14:21:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:34.842 14:21:40 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:34.842 14:21:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:34.842 14:21:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.842 14:21:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:34.842 14:21:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.842 14:21:40 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:34.842 ************************************ 00:15:34.842 END TEST nvmf_lvs_grow 00:15:34.843 ************************************ 00:15:34.843 00:15:34.843 real 0m40.077s 00:15:34.843 user 1m3.117s 00:15:34.843 sys 0m11.301s 00:15:34.843 14:21:40 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:15:34.843 14:21:40 -- common/autotest_common.sh@10 -- # set +x 00:15:35.102 14:21:40 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:35.102 14:21:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:35.102 14:21:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:35.102 14:21:40 -- common/autotest_common.sh@10 -- # set +x 00:15:35.102 ************************************ 00:15:35.102 START TEST nvmf_bdev_io_wait 00:15:35.102 ************************************ 00:15:35.102 14:21:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:35.102 * Looking for test storage... 00:15:35.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:35.102 14:21:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:35.102 14:21:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:35.102 14:21:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:35.102 14:21:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:35.102 14:21:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:35.102 14:21:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:35.102 14:21:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:35.102 14:21:40 -- scripts/common.sh@335 -- # IFS=.-: 00:15:35.102 14:21:40 -- scripts/common.sh@335 -- # read -ra ver1 00:15:35.102 14:21:40 -- scripts/common.sh@336 -- # IFS=.-: 00:15:35.102 14:21:40 -- scripts/common.sh@336 -- # read -ra ver2 00:15:35.102 14:21:40 -- scripts/common.sh@337 -- # local 'op=<' 00:15:35.102 14:21:40 -- scripts/common.sh@339 -- # ver1_l=2 00:15:35.102 14:21:40 -- scripts/common.sh@340 -- # ver2_l=1 00:15:35.102 14:21:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:35.102 14:21:40 -- scripts/common.sh@343 -- # case "$op" in 00:15:35.102 14:21:40 -- scripts/common.sh@344 -- # : 1 00:15:35.102 14:21:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:35.102 14:21:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:35.102 14:21:40 -- scripts/common.sh@364 -- # decimal 1 00:15:35.102 14:21:40 -- scripts/common.sh@352 -- # local d=1 00:15:35.102 14:21:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:35.102 14:21:40 -- scripts/common.sh@354 -- # echo 1 00:15:35.102 14:21:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:35.102 14:21:40 -- scripts/common.sh@365 -- # decimal 2 00:15:35.102 14:21:40 -- scripts/common.sh@352 -- # local d=2 00:15:35.102 14:21:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:35.102 14:21:40 -- scripts/common.sh@354 -- # echo 2 00:15:35.102 14:21:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:35.102 14:21:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:35.102 14:21:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:35.102 14:21:40 -- scripts/common.sh@367 -- # return 0 00:15:35.102 14:21:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:35.102 14:21:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:35.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.102 --rc genhtml_branch_coverage=1 00:15:35.102 --rc genhtml_function_coverage=1 00:15:35.102 --rc genhtml_legend=1 00:15:35.102 --rc geninfo_all_blocks=1 00:15:35.102 --rc geninfo_unexecuted_blocks=1 00:15:35.102 00:15:35.102 ' 00:15:35.102 14:21:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:35.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.102 --rc genhtml_branch_coverage=1 00:15:35.102 --rc genhtml_function_coverage=1 00:15:35.102 --rc genhtml_legend=1 00:15:35.102 --rc geninfo_all_blocks=1 00:15:35.102 --rc geninfo_unexecuted_blocks=1 00:15:35.102 00:15:35.102 ' 00:15:35.102 14:21:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:35.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.102 --rc genhtml_branch_coverage=1 00:15:35.102 --rc genhtml_function_coverage=1 00:15:35.102 --rc genhtml_legend=1 00:15:35.102 --rc geninfo_all_blocks=1 00:15:35.102 --rc geninfo_unexecuted_blocks=1 00:15:35.102 00:15:35.102 ' 00:15:35.102 14:21:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:35.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.102 --rc genhtml_branch_coverage=1 00:15:35.102 --rc genhtml_function_coverage=1 00:15:35.102 --rc genhtml_legend=1 00:15:35.102 --rc geninfo_all_blocks=1 00:15:35.102 --rc geninfo_unexecuted_blocks=1 00:15:35.102 00:15:35.102 ' 00:15:35.102 14:21:40 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:35.102 14:21:40 -- nvmf/common.sh@7 -- # uname -s 00:15:35.102 14:21:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:35.102 14:21:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:35.102 14:21:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:35.102 14:21:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:35.102 14:21:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:35.102 14:21:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:35.102 14:21:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:35.102 14:21:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:35.102 14:21:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:35.102 14:21:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:35.102 14:21:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 
00:15:35.102 14:21:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:15:35.102 14:21:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:35.102 14:21:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:35.102 14:21:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:35.102 14:21:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:35.102 14:21:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:35.102 14:21:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:35.102 14:21:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:35.102 14:21:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.103 14:21:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.103 14:21:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.103 14:21:40 -- paths/export.sh@5 -- # export PATH 00:15:35.103 14:21:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.103 14:21:40 -- nvmf/common.sh@46 -- # : 0 00:15:35.103 14:21:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:35.103 14:21:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:35.103 14:21:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:35.103 14:21:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:35.103 14:21:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:35.103 14:21:40 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:15:35.103 14:21:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:35.103 14:21:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:35.103 14:21:40 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:35.103 14:21:40 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:35.103 14:21:40 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:35.103 14:21:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:35.103 14:21:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:35.103 14:21:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:35.103 14:21:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:35.103 14:21:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:35.103 14:21:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:35.103 14:21:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:35.103 14:21:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:35.103 14:21:40 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:35.103 14:21:40 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:35.103 14:21:40 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:35.103 14:21:40 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:35.103 14:21:40 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:35.103 14:21:40 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:35.103 14:21:40 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:35.103 14:21:40 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:35.103 14:21:40 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:35.103 14:21:40 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:35.103 14:21:40 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:35.103 14:21:40 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:35.103 14:21:40 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:35.103 14:21:40 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:35.103 14:21:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:35.103 14:21:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:35.103 14:21:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:35.103 14:21:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:35.103 14:21:40 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:35.103 14:21:40 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:35.364 Cannot find device "nvmf_tgt_br" 00:15:35.364 14:21:40 -- nvmf/common.sh@154 -- # true 00:15:35.364 14:21:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:35.364 Cannot find device "nvmf_tgt_br2" 00:15:35.364 14:21:40 -- nvmf/common.sh@155 -- # true 00:15:35.364 14:21:40 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:35.364 14:21:40 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:35.364 Cannot find device "nvmf_tgt_br" 00:15:35.364 14:21:40 -- nvmf/common.sh@157 -- # true 00:15:35.364 14:21:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:35.364 Cannot find device "nvmf_tgt_br2" 00:15:35.364 14:21:40 -- nvmf/common.sh@158 -- # true 00:15:35.364 14:21:40 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:35.364 14:21:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:35.364 14:21:40 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:35.364 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:35.364 14:21:40 -- nvmf/common.sh@161 -- # true 00:15:35.364 14:21:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:35.364 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:35.364 14:21:40 -- nvmf/common.sh@162 -- # true 00:15:35.364 14:21:40 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:35.364 14:21:40 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:35.364 14:21:40 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:35.364 14:21:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:35.364 14:21:40 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:35.364 14:21:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:35.364 14:21:40 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:35.364 14:21:40 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:35.364 14:21:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:35.364 14:21:40 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:35.364 14:21:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:35.364 14:21:40 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:35.364 14:21:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:35.364 14:21:40 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:35.364 14:21:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:35.364 14:21:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:35.364 14:21:40 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:35.364 14:21:40 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:35.364 14:21:40 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:35.364 14:21:40 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:35.364 14:21:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:35.635 14:21:41 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:35.635 14:21:41 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:35.635 14:21:41 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:35.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:35.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:15:35.635 00:15:35.635 --- 10.0.0.2 ping statistics --- 00:15:35.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.635 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:15:35.635 14:21:41 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:35.635 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:35.635 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:15:35.635 00:15:35.635 --- 10.0.0.3 ping statistics --- 00:15:35.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.635 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:15:35.635 14:21:41 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:35.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:35.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:15:35.635 00:15:35.635 --- 10.0.0.1 ping statistics --- 00:15:35.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.635 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:15:35.635 14:21:41 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:35.635 14:21:41 -- nvmf/common.sh@421 -- # return 0 00:15:35.635 14:21:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:35.635 14:21:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:35.635 14:21:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:35.635 14:21:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:35.635 14:21:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:35.635 14:21:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:35.635 14:21:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:35.635 14:21:41 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:35.635 14:21:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:35.635 14:21:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:35.635 14:21:41 -- common/autotest_common.sh@10 -- # set +x 00:15:35.635 14:21:41 -- nvmf/common.sh@469 -- # nvmfpid=84936 00:15:35.635 14:21:41 -- nvmf/common.sh@470 -- # waitforlisten 84936 00:15:35.635 14:21:41 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:35.635 14:21:41 -- common/autotest_common.sh@829 -- # '[' -z 84936 ']' 00:15:35.635 14:21:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.635 14:21:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:35.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.636 14:21:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.636 14:21:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:35.636 14:21:41 -- common/autotest_common.sh@10 -- # set +x 00:15:35.636 [2024-12-05 14:21:41.115564] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:35.636 [2024-12-05 14:21:41.115659] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:35.636 [2024-12-05 14:21:41.254089] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:35.911 [2024-12-05 14:21:41.337901] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:35.911 [2024-12-05 14:21:41.338065] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:35.911 [2024-12-05 14:21:41.338078] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:35.911 [2024-12-05 14:21:41.338086] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
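For orientation, the virtual network that nvmf_veth_init builds for these tcp-transport tests (traced above) is roughly the following; every command is taken from the trace, with the link-up steps and the second target interface (nvmf_tgt_if2 at 10.0.0.3, set up the same way) omitted for brevity:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # host side reaches the target namespace through the bridge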
00:15:35.911 [2024-12-05 14:21:41.338246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.911 [2024-12-05 14:21:41.338397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:35.911 [2024-12-05 14:21:41.339096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:35.911 [2024-12-05 14:21:41.339139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.486 14:21:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:36.486 14:21:42 -- common/autotest_common.sh@862 -- # return 0 00:15:36.486 14:21:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:36.486 14:21:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:36.486 14:21:42 -- common/autotest_common.sh@10 -- # set +x 00:15:36.746 14:21:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:36.746 14:21:42 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:36.746 14:21:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.746 14:21:42 -- common/autotest_common.sh@10 -- # set +x 00:15:36.746 14:21:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.746 14:21:42 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:36.746 14:21:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.746 14:21:42 -- common/autotest_common.sh@10 -- # set +x 00:15:36.746 14:21:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.746 14:21:42 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:36.746 14:21:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.746 14:21:42 -- common/autotest_common.sh@10 -- # set +x 00:15:36.746 [2024-12-05 14:21:42.256804] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:36.746 14:21:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.746 14:21:42 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:36.746 14:21:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.746 14:21:42 -- common/autotest_common.sh@10 -- # set +x 00:15:36.746 Malloc0 00:15:36.746 14:21:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.746 14:21:42 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:36.746 14:21:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.746 14:21:42 -- common/autotest_common.sh@10 -- # set +x 00:15:36.746 14:21:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.746 14:21:42 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:36.746 14:21:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.746 14:21:42 -- common/autotest_common.sh@10 -- # set +x 00:15:36.746 14:21:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.746 14:21:42 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:36.746 14:21:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.746 14:21:42 -- common/autotest_common.sh@10 -- # set +x 00:15:36.746 [2024-12-05 14:21:42.316697] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:36.746 14:21:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.746 14:21:42 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=84993 00:15:36.746 14:21:42 
-- target/bdev_io_wait.sh@30 -- # READ_PID=84995 00:15:36.746 14:21:42 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:36.746 14:21:42 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:36.746 14:21:42 -- nvmf/common.sh@520 -- # config=() 00:15:36.746 14:21:42 -- nvmf/common.sh@520 -- # local subsystem config 00:15:36.746 14:21:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:36.746 14:21:42 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:36.746 14:21:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:36.746 { 00:15:36.746 "params": { 00:15:36.746 "name": "Nvme$subsystem", 00:15:36.746 "trtype": "$TEST_TRANSPORT", 00:15:36.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:36.746 "adrfam": "ipv4", 00:15:36.746 "trsvcid": "$NVMF_PORT", 00:15:36.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:36.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:36.746 "hdgst": ${hdgst:-false}, 00:15:36.746 "ddgst": ${ddgst:-false} 00:15:36.746 }, 00:15:36.746 "method": "bdev_nvme_attach_controller" 00:15:36.746 } 00:15:36.746 EOF 00:15:36.746 )") 00:15:36.746 14:21:42 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:36.746 14:21:42 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=84997 00:15:36.746 14:21:42 -- nvmf/common.sh@520 -- # config=() 00:15:36.746 14:21:42 -- nvmf/common.sh@520 -- # local subsystem config 00:15:36.746 14:21:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:36.746 14:21:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:36.746 { 00:15:36.746 "params": { 00:15:36.746 "name": "Nvme$subsystem", 00:15:36.746 "trtype": "$TEST_TRANSPORT", 00:15:36.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:36.746 "adrfam": "ipv4", 00:15:36.746 "trsvcid": "$NVMF_PORT", 00:15:36.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:36.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:36.746 "hdgst": ${hdgst:-false}, 00:15:36.746 "ddgst": ${ddgst:-false} 00:15:36.746 }, 00:15:36.746 "method": "bdev_nvme_attach_controller" 00:15:36.746 } 00:15:36.746 EOF 00:15:36.746 )") 00:15:36.746 14:21:42 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=84999 00:15:36.746 14:21:42 -- target/bdev_io_wait.sh@35 -- # sync 00:15:36.746 14:21:42 -- nvmf/common.sh@542 -- # cat 00:15:36.746 14:21:42 -- nvmf/common.sh@542 -- # cat 00:15:36.746 14:21:42 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:36.746 14:21:42 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:36.746 14:21:42 -- nvmf/common.sh@520 -- # config=() 00:15:36.746 14:21:42 -- nvmf/common.sh@520 -- # local subsystem config 00:15:36.746 14:21:42 -- nvmf/common.sh@544 -- # jq . 
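The target side of bdev_io_wait is assembled with a handful of RPCs once nvmf_tgt is up with --wait-for-rpc (issued above through rpc_cmd); a standalone sketch of the same sequence with the scripts/rpc.py client used elsewhere in this log, assuming the default /var/tmp/spdk.sock, would be:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -p 5 -c 1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420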
00:15:36.746 14:21:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:36.746 14:21:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:36.746 { 00:15:36.746 "params": { 00:15:36.747 "name": "Nvme$subsystem", 00:15:36.747 "trtype": "$TEST_TRANSPORT", 00:15:36.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:36.747 "adrfam": "ipv4", 00:15:36.747 "trsvcid": "$NVMF_PORT", 00:15:36.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:36.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:36.747 "hdgst": ${hdgst:-false}, 00:15:36.747 "ddgst": ${ddgst:-false} 00:15:36.747 }, 00:15:36.747 "method": "bdev_nvme_attach_controller" 00:15:36.747 } 00:15:36.747 EOF 00:15:36.747 )") 00:15:36.747 14:21:42 -- nvmf/common.sh@544 -- # jq . 00:15:36.747 14:21:42 -- nvmf/common.sh@545 -- # IFS=, 00:15:36.747 14:21:42 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:36.747 14:21:42 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:36.747 14:21:42 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:36.747 "params": { 00:15:36.747 "name": "Nvme1", 00:15:36.747 "trtype": "tcp", 00:15:36.747 "traddr": "10.0.0.2", 00:15:36.747 "adrfam": "ipv4", 00:15:36.747 "trsvcid": "4420", 00:15:36.747 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:36.747 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:36.747 "hdgst": false, 00:15:36.747 "ddgst": false 00:15:36.747 }, 00:15:36.747 "method": "bdev_nvme_attach_controller" 00:15:36.747 }' 00:15:36.747 14:21:42 -- nvmf/common.sh@520 -- # config=() 00:15:36.747 14:21:42 -- nvmf/common.sh@520 -- # local subsystem config 00:15:36.747 14:21:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:36.747 14:21:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:36.747 { 00:15:36.747 "params": { 00:15:36.747 "name": "Nvme$subsystem", 00:15:36.747 "trtype": "$TEST_TRANSPORT", 00:15:36.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:36.747 "adrfam": "ipv4", 00:15:36.747 "trsvcid": "$NVMF_PORT", 00:15:36.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:36.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:36.747 "hdgst": ${hdgst:-false}, 00:15:36.747 "ddgst": ${ddgst:-false} 00:15:36.747 }, 00:15:36.747 "method": "bdev_nvme_attach_controller" 00:15:36.747 } 00:15:36.747 EOF 00:15:36.747 )") 00:15:36.747 14:21:42 -- nvmf/common.sh@545 -- # IFS=, 00:15:36.747 14:21:42 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:36.747 "params": { 00:15:36.747 "name": "Nvme1", 00:15:36.747 "trtype": "tcp", 00:15:36.747 "traddr": "10.0.0.2", 00:15:36.747 "adrfam": "ipv4", 00:15:36.747 "trsvcid": "4420", 00:15:36.747 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:36.747 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:36.747 "hdgst": false, 00:15:36.747 "ddgst": false 00:15:36.747 }, 00:15:36.747 "method": "bdev_nvme_attach_controller" 00:15:36.747 }' 00:15:36.747 14:21:42 -- nvmf/common.sh@542 -- # cat 00:15:36.747 14:21:42 -- nvmf/common.sh@542 -- # cat 00:15:36.747 14:21:42 -- nvmf/common.sh@544 -- # jq . 00:15:36.747 14:21:42 -- nvmf/common.sh@544 -- # jq . 
00:15:36.747 14:21:42 -- nvmf/common.sh@545 -- # IFS=, 00:15:36.747 14:21:42 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:36.747 "params": { 00:15:36.747 "name": "Nvme1", 00:15:36.747 "trtype": "tcp", 00:15:36.747 "traddr": "10.0.0.2", 00:15:36.747 "adrfam": "ipv4", 00:15:36.747 "trsvcid": "4420", 00:15:36.747 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:36.747 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:36.747 "hdgst": false, 00:15:36.747 "ddgst": false 00:15:36.747 }, 00:15:36.747 "method": "bdev_nvme_attach_controller" 00:15:36.747 }' 00:15:36.747 14:21:42 -- nvmf/common.sh@545 -- # IFS=, 00:15:36.747 14:21:42 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:36.747 "params": { 00:15:36.747 "name": "Nvme1", 00:15:36.747 "trtype": "tcp", 00:15:36.747 "traddr": "10.0.0.2", 00:15:36.747 "adrfam": "ipv4", 00:15:36.747 "trsvcid": "4420", 00:15:36.747 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:36.747 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:36.747 "hdgst": false, 00:15:36.747 "ddgst": false 00:15:36.747 }, 00:15:36.747 "method": "bdev_nvme_attach_controller" 00:15:36.747 }' 00:15:36.747 [2024-12-05 14:21:42.375730] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:36.747 [2024-12-05 14:21:42.375967] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:37.006 14:21:42 -- target/bdev_io_wait.sh@37 -- # wait 84993 00:15:37.006 [2024-12-05 14:21:42.406168] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:37.006 [2024-12-05 14:21:42.406250] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:37.006 [2024-12-05 14:21:42.422748] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:37.006 [2024-12-05 14:21:42.422883] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:37.006 [2024-12-05 14:21:42.423014] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:37.006 [2024-12-05 14:21:42.423087] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:37.006 [2024-12-05 14:21:42.580384] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.265 [2024-12-05 14:21:42.656272] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.265 [2024-12-05 14:21:42.670469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:37.265 [2024-12-05 14:21:42.752658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:37.265 [2024-12-05 14:21:42.775486] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.265 [2024-12-05 14:21:42.859593] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.265 [2024-12-05 14:21:42.864558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:37.524 Running I/O for 1 seconds... 
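Each of the four bdevperf instances above reads its target description on /dev/fd/63; stripped of the shell templating, the fragment printed by the trace reduces to a single controller attach (the generated file contains only what is shown here plus surrounding boilerplate not printed in the trace):

  {
    "params": {
      "name": "Nvme1",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode1",
      "hostnqn": "nqn.2016-06.io.spdk:host1",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }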
00:15:37.524 Running I/O for 1 seconds... 00:15:37.524 [2024-12-05 14:21:42.935311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:15:37.524 Running I/O for 1 seconds... 00:15:37.524 Running I/O for 1 seconds... 00:15:38.462 00:15:38.462 Latency(us) 00:15:38.462 [2024-12-05T14:21:44.110Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.462 [2024-12-05T14:21:44.110Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:38.462 Nvme1n1 : 1.01 10015.02 39.12 0.00 0.00 12724.44 7983.48 32887.16 00:15:38.462 [2024-12-05T14:21:44.110Z] =================================================================================================================== 00:15:38.462 [2024-12-05T14:21:44.110Z] Total : 10015.02 39.12 0.00 0.00 12724.44 7983.48 32887.16 00:15:38.462 00:15:38.462 Latency(us) 00:15:38.462 [2024-12-05T14:21:44.110Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.462 [2024-12-05T14:21:44.110Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:38.462 Nvme1n1 : 1.01 7573.11 29.58 0.00 0.00 16822.09 8877.15 25976.09 00:15:38.462 [2024-12-05T14:21:44.110Z] =================================================================================================================== 00:15:38.462 [2024-12-05T14:21:44.110Z] Total : 7573.11 29.58 0.00 0.00 16822.09 8877.15 25976.09 00:15:38.462 00:15:38.462 Latency(us) 00:15:38.462 [2024-12-05T14:21:44.110Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.462 [2024-12-05T14:21:44.110Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:38.462 Nvme1n1 : 1.00 232857.83 909.60 0.00 0.00 547.81 223.42 863.88 00:15:38.462 [2024-12-05T14:21:44.110Z] =================================================================================================================== 00:15:38.462 [2024-12-05T14:21:44.110Z] Total : 232857.83 909.60 0.00 0.00 547.81 223.42 863.88 00:15:38.462 00:15:38.462 Latency(us) 00:15:38.462 [2024-12-05T14:21:44.110Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.462 [2024-12-05T14:21:44.110Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:38.462 Nvme1n1 : 1.01 5908.86 23.08 0.00 0.00 21560.01 4557.73 34317.03 00:15:38.462 [2024-12-05T14:21:44.110Z] =================================================================================================================== 00:15:38.462 [2024-12-05T14:21:44.110Z] Total : 5908.86 23.08 0.00 0.00 21560.01 4557.73 34317.03 00:15:39.031 14:21:44 -- target/bdev_io_wait.sh@38 -- # wait 84995 00:15:39.031 14:21:44 -- target/bdev_io_wait.sh@39 -- # wait 84997 00:15:39.031 14:21:44 -- target/bdev_io_wait.sh@40 -- # wait 84999 00:15:39.031 14:21:44 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:39.031 14:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.031 14:21:44 -- common/autotest_common.sh@10 -- # set +x 00:15:39.031 14:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.031 14:21:44 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:39.031 14:21:44 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:39.031 14:21:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:39.031 14:21:44 -- nvmf/common.sh@116 -- # sync 00:15:39.031 14:21:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:39.031 14:21:44 -- nvmf/common.sh@119 -- # set +e 00:15:39.031 14:21:44 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:15:39.031 14:21:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:39.031 rmmod nvme_tcp 00:15:39.031 rmmod nvme_fabrics 00:15:39.031 rmmod nvme_keyring 00:15:39.031 14:21:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:39.031 14:21:44 -- nvmf/common.sh@123 -- # set -e 00:15:39.031 14:21:44 -- nvmf/common.sh@124 -- # return 0 00:15:39.031 14:21:44 -- nvmf/common.sh@477 -- # '[' -n 84936 ']' 00:15:39.031 14:21:44 -- nvmf/common.sh@478 -- # killprocess 84936 00:15:39.031 14:21:44 -- common/autotest_common.sh@936 -- # '[' -z 84936 ']' 00:15:39.031 14:21:44 -- common/autotest_common.sh@940 -- # kill -0 84936 00:15:39.031 14:21:44 -- common/autotest_common.sh@941 -- # uname 00:15:39.031 14:21:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:39.031 14:21:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84936 00:15:39.031 14:21:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:39.031 14:21:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:39.031 killing process with pid 84936 00:15:39.031 14:21:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84936' 00:15:39.031 14:21:44 -- common/autotest_common.sh@955 -- # kill 84936 00:15:39.031 14:21:44 -- common/autotest_common.sh@960 -- # wait 84936 00:15:39.290 14:21:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:39.290 14:21:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:39.290 14:21:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:39.290 14:21:44 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:39.290 14:21:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:39.290 14:21:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.290 14:21:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:39.290 14:21:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.290 14:21:44 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:39.290 00:15:39.290 real 0m4.371s 00:15:39.290 user 0m19.312s 00:15:39.290 sys 0m2.156s 00:15:39.290 14:21:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:39.290 ************************************ 00:15:39.290 END TEST nvmf_bdev_io_wait 00:15:39.290 14:21:44 -- common/autotest_common.sh@10 -- # set +x 00:15:39.290 ************************************ 00:15:39.290 14:21:44 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:39.290 14:21:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:39.290 14:21:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:39.290 14:21:44 -- common/autotest_common.sh@10 -- # set +x 00:15:39.290 ************************************ 00:15:39.290 START TEST nvmf_queue_depth 00:15:39.290 ************************************ 00:15:39.290 14:21:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:39.559 * Looking for test storage... 
00:15:39.559 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:39.559 14:21:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:39.559 14:21:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:39.559 14:21:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:39.559 14:21:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:39.559 14:21:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:39.559 14:21:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:39.559 14:21:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:39.559 14:21:45 -- scripts/common.sh@335 -- # IFS=.-: 00:15:39.559 14:21:45 -- scripts/common.sh@335 -- # read -ra ver1 00:15:39.559 14:21:45 -- scripts/common.sh@336 -- # IFS=.-: 00:15:39.560 14:21:45 -- scripts/common.sh@336 -- # read -ra ver2 00:15:39.560 14:21:45 -- scripts/common.sh@337 -- # local 'op=<' 00:15:39.560 14:21:45 -- scripts/common.sh@339 -- # ver1_l=2 00:15:39.560 14:21:45 -- scripts/common.sh@340 -- # ver2_l=1 00:15:39.560 14:21:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:39.560 14:21:45 -- scripts/common.sh@343 -- # case "$op" in 00:15:39.560 14:21:45 -- scripts/common.sh@344 -- # : 1 00:15:39.560 14:21:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:39.560 14:21:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:39.560 14:21:45 -- scripts/common.sh@364 -- # decimal 1 00:15:39.560 14:21:45 -- scripts/common.sh@352 -- # local d=1 00:15:39.560 14:21:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:39.560 14:21:45 -- scripts/common.sh@354 -- # echo 1 00:15:39.560 14:21:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:39.560 14:21:45 -- scripts/common.sh@365 -- # decimal 2 00:15:39.560 14:21:45 -- scripts/common.sh@352 -- # local d=2 00:15:39.560 14:21:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:39.560 14:21:45 -- scripts/common.sh@354 -- # echo 2 00:15:39.560 14:21:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:39.560 14:21:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:39.560 14:21:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:39.560 14:21:45 -- scripts/common.sh@367 -- # return 0 00:15:39.560 14:21:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:39.560 14:21:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:39.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.560 --rc genhtml_branch_coverage=1 00:15:39.560 --rc genhtml_function_coverage=1 00:15:39.560 --rc genhtml_legend=1 00:15:39.560 --rc geninfo_all_blocks=1 00:15:39.560 --rc geninfo_unexecuted_blocks=1 00:15:39.560 00:15:39.560 ' 00:15:39.560 14:21:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:39.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.560 --rc genhtml_branch_coverage=1 00:15:39.560 --rc genhtml_function_coverage=1 00:15:39.560 --rc genhtml_legend=1 00:15:39.560 --rc geninfo_all_blocks=1 00:15:39.560 --rc geninfo_unexecuted_blocks=1 00:15:39.560 00:15:39.560 ' 00:15:39.560 14:21:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:39.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.560 --rc genhtml_branch_coverage=1 00:15:39.560 --rc genhtml_function_coverage=1 00:15:39.560 --rc genhtml_legend=1 00:15:39.560 --rc geninfo_all_blocks=1 00:15:39.560 --rc geninfo_unexecuted_blocks=1 00:15:39.560 00:15:39.560 ' 00:15:39.560 
14:21:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:39.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.560 --rc genhtml_branch_coverage=1 00:15:39.560 --rc genhtml_function_coverage=1 00:15:39.560 --rc genhtml_legend=1 00:15:39.560 --rc geninfo_all_blocks=1 00:15:39.560 --rc geninfo_unexecuted_blocks=1 00:15:39.560 00:15:39.560 ' 00:15:39.560 14:21:45 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:39.560 14:21:45 -- nvmf/common.sh@7 -- # uname -s 00:15:39.560 14:21:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:39.560 14:21:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:39.560 14:21:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:39.560 14:21:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:39.560 14:21:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:39.560 14:21:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:39.560 14:21:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:39.560 14:21:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:39.560 14:21:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:39.560 14:21:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:39.560 14:21:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:15:39.560 14:21:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:15:39.560 14:21:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:39.560 14:21:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:39.560 14:21:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:39.560 14:21:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:39.560 14:21:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:39.560 14:21:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:39.560 14:21:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:39.560 14:21:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.560 14:21:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.560 14:21:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.560 14:21:45 -- paths/export.sh@5 -- # export PATH 00:15:39.560 14:21:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.560 14:21:45 -- nvmf/common.sh@46 -- # : 0 00:15:39.560 14:21:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:39.560 14:21:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:39.560 14:21:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:39.560 14:21:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:39.560 14:21:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:39.560 14:21:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:39.560 14:21:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:39.560 14:21:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:39.560 14:21:45 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:39.560 14:21:45 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:39.560 14:21:45 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:39.560 14:21:45 -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:39.560 14:21:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:39.560 14:21:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:39.560 14:21:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:39.560 14:21:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:39.560 14:21:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:39.560 14:21:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.560 14:21:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:39.560 14:21:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.560 14:21:45 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:39.560 14:21:45 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:39.560 14:21:45 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:39.560 14:21:45 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:39.560 14:21:45 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:39.560 14:21:45 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:39.560 14:21:45 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:39.560 14:21:45 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:39.560 14:21:45 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:39.560 14:21:45 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:39.560 14:21:45 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:39.560 14:21:45 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:39.560 14:21:45 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:39.560 14:21:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:39.560 14:21:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:39.560 14:21:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:39.560 14:21:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:39.560 14:21:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:39.560 14:21:45 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:39.560 14:21:45 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:39.560 Cannot find device "nvmf_tgt_br" 00:15:39.560 14:21:45 -- nvmf/common.sh@154 -- # true 00:15:39.560 14:21:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:39.560 Cannot find device "nvmf_tgt_br2" 00:15:39.560 14:21:45 -- nvmf/common.sh@155 -- # true 00:15:39.560 14:21:45 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:39.560 14:21:45 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:39.560 Cannot find device "nvmf_tgt_br" 00:15:39.560 14:21:45 -- nvmf/common.sh@157 -- # true 00:15:39.560 14:21:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:39.560 Cannot find device "nvmf_tgt_br2" 00:15:39.560 14:21:45 -- nvmf/common.sh@158 -- # true 00:15:39.560 14:21:45 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:39.822 14:21:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:39.822 14:21:45 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:39.822 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:39.822 14:21:45 -- nvmf/common.sh@161 -- # true 00:15:39.822 14:21:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:39.822 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:39.822 14:21:45 -- nvmf/common.sh@162 -- # true 00:15:39.822 14:21:45 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:39.822 14:21:45 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:39.822 14:21:45 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:39.822 14:21:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:39.822 14:21:45 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:39.822 14:21:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:39.822 14:21:45 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:39.822 14:21:45 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:39.822 14:21:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:39.822 14:21:45 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:39.822 14:21:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:39.822 14:21:45 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:39.822 14:21:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:39.823 14:21:45 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:39.823 14:21:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:15:39.823 14:21:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:39.823 14:21:45 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:39.823 14:21:45 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:39.823 14:21:45 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:39.823 14:21:45 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:39.823 14:21:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:39.823 14:21:45 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:39.823 14:21:45 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:39.823 14:21:45 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:39.823 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:39.823 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:15:39.823 00:15:39.823 --- 10.0.0.2 ping statistics --- 00:15:39.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.823 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:15:39.823 14:21:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:39.823 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:39.823 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:15:39.823 00:15:39.823 --- 10.0.0.3 ping statistics --- 00:15:39.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.823 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:15:39.823 14:21:45 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:39.823 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:39.823 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:15:39.823 00:15:39.823 --- 10.0.0.1 ping statistics --- 00:15:39.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.823 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:15:39.823 14:21:45 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:39.823 14:21:45 -- nvmf/common.sh@421 -- # return 0 00:15:39.823 14:21:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:39.823 14:21:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:39.823 14:21:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:39.823 14:21:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:39.823 14:21:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:39.823 14:21:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:39.823 14:21:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:39.823 14:21:45 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:39.823 14:21:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:39.823 14:21:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:39.823 14:21:45 -- common/autotest_common.sh@10 -- # set +x 00:15:39.823 14:21:45 -- nvmf/common.sh@469 -- # nvmfpid=85236 00:15:39.823 14:21:45 -- nvmf/common.sh@470 -- # waitforlisten 85236 00:15:39.823 14:21:45 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:39.823 14:21:45 -- common/autotest_common.sh@829 -- # '[' -z 85236 ']' 00:15:39.823 14:21:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.823 14:21:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:39.823 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:15:39.823 14:21:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.823 14:21:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:39.823 14:21:45 -- common/autotest_common.sh@10 -- # set +x 00:15:40.081 [2024-12-05 14:21:45.491360] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:40.081 [2024-12-05 14:21:45.491439] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:40.081 [2024-12-05 14:21:45.625047] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.081 [2024-12-05 14:21:45.701655] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:40.081 [2024-12-05 14:21:45.701794] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:40.081 [2024-12-05 14:21:45.701821] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:40.081 [2024-12-05 14:21:45.701830] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:40.081 [2024-12-05 14:21:45.701864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:41.017 14:21:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:41.017 14:21:46 -- common/autotest_common.sh@862 -- # return 0 00:15:41.017 14:21:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:41.017 14:21:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:41.017 14:21:46 -- common/autotest_common.sh@10 -- # set +x 00:15:41.017 14:21:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:41.017 14:21:46 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:41.017 14:21:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.017 14:21:46 -- common/autotest_common.sh@10 -- # set +x 00:15:41.017 [2024-12-05 14:21:46.478106] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:41.017 14:21:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.017 14:21:46 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:41.017 14:21:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.017 14:21:46 -- common/autotest_common.sh@10 -- # set +x 00:15:41.017 Malloc0 00:15:41.017 14:21:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.017 14:21:46 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:41.017 14:21:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.017 14:21:46 -- common/autotest_common.sh@10 -- # set +x 00:15:41.017 14:21:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.017 14:21:46 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:41.017 14:21:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.017 14:21:46 -- common/autotest_common.sh@10 -- # set +x 00:15:41.017 14:21:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.017 14:21:46 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
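For reference, the target bring-up traced here reduces to roughly the following rpc.py sequence (a condensed sketch; names, sizes and flags are exactly the ones queue_depth.sh passes via rpc_cmd, addressed at the nvmf_tgt started above):
  # TCP transport, flags as the test passes them (-o, -u 8192)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # 64 MiB / 512-byte-block malloc bdev to back the namespace
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # subsystem allowing any host (-a) with a fixed serial (-s)
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # listener on the veth address the initiator will dial
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420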
00:15:41.017 14:21:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.017 14:21:46 -- common/autotest_common.sh@10 -- # set +x 00:15:41.017 [2024-12-05 14:21:46.545310] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:41.017 14:21:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.017 14:21:46 -- target/queue_depth.sh@30 -- # bdevperf_pid=85286 00:15:41.017 14:21:46 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:41.017 14:21:46 -- target/queue_depth.sh@33 -- # waitforlisten 85286 /var/tmp/bdevperf.sock 00:15:41.017 14:21:46 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:41.017 14:21:46 -- common/autotest_common.sh@829 -- # '[' -z 85286 ']' 00:15:41.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:41.017 14:21:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:41.017 14:21:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:41.017 14:21:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:41.017 14:21:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:41.017 14:21:46 -- common/autotest_common.sh@10 -- # set +x 00:15:41.017 [2024-12-05 14:21:46.596510] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:41.017 [2024-12-05 14:21:46.596584] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85286 ] 00:15:41.276 [2024-12-05 14:21:46.731761] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.276 [2024-12-05 14:21:46.803290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.212 14:21:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:42.212 14:21:47 -- common/autotest_common.sh@862 -- # return 0 00:15:42.212 14:21:47 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:42.212 14:21:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.212 14:21:47 -- common/autotest_common.sh@10 -- # set +x 00:15:42.212 NVMe0n1 00:15:42.212 14:21:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.212 14:21:47 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:42.212 Running I/O for 10 seconds... 
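The queue-depth workload itself runs in bdevperf against its own RPC socket; a minimal re-run of what target/queue_depth.sh does at this point looks like the sketch below (paths and parameters as in the trace, assuming the target above is still listening on 10.0.0.2:4420):
  # bdevperf started idle (-z): queue depth 1024, 4096-byte I/O, verify workload, 10 s
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  # attach the exported namespace as bdev NVMe0n1 over NVMe/TCP
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # start the configured job; the latency table that follows is its summary
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests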
00:15:52.186 00:15:52.186 Latency(us) 00:15:52.186 [2024-12-05T14:21:57.834Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.186 [2024-12-05T14:21:57.834Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:52.186 Verification LBA range: start 0x0 length 0x4000 00:15:52.186 NVMe0n1 : 10.05 17216.27 67.25 0.00 0.00 59300.27 10604.92 46232.67 00:15:52.186 [2024-12-05T14:21:57.834Z] =================================================================================================================== 00:15:52.186 [2024-12-05T14:21:57.834Z] Total : 17216.27 67.25 0.00 0.00 59300.27 10604.92 46232.67 00:15:52.186 0 00:15:52.186 14:21:57 -- target/queue_depth.sh@39 -- # killprocess 85286 00:15:52.186 14:21:57 -- common/autotest_common.sh@936 -- # '[' -z 85286 ']' 00:15:52.186 14:21:57 -- common/autotest_common.sh@940 -- # kill -0 85286 00:15:52.186 14:21:57 -- common/autotest_common.sh@941 -- # uname 00:15:52.186 14:21:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:52.186 14:21:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85286 00:15:52.186 14:21:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:52.186 killing process with pid 85286 00:15:52.186 14:21:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:52.186 14:21:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85286' 00:15:52.186 Received shutdown signal, test time was about 10.000000 seconds 00:15:52.186 00:15:52.186 Latency(us) 00:15:52.186 [2024-12-05T14:21:57.834Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.186 [2024-12-05T14:21:57.834Z] =================================================================================================================== 00:15:52.186 [2024-12-05T14:21:57.834Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:52.186 14:21:57 -- common/autotest_common.sh@955 -- # kill 85286 00:15:52.186 14:21:57 -- common/autotest_common.sh@960 -- # wait 85286 00:15:52.445 14:21:58 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:52.445 14:21:58 -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:52.445 14:21:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:52.445 14:21:58 -- nvmf/common.sh@116 -- # sync 00:15:52.445 14:21:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:52.445 14:21:58 -- nvmf/common.sh@119 -- # set +e 00:15:52.445 14:21:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:52.445 14:21:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:52.445 rmmod nvme_tcp 00:15:52.445 rmmod nvme_fabrics 00:15:52.704 rmmod nvme_keyring 00:15:52.704 14:21:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:52.704 14:21:58 -- nvmf/common.sh@123 -- # set -e 00:15:52.704 14:21:58 -- nvmf/common.sh@124 -- # return 0 00:15:52.704 14:21:58 -- nvmf/common.sh@477 -- # '[' -n 85236 ']' 00:15:52.704 14:21:58 -- nvmf/common.sh@478 -- # killprocess 85236 00:15:52.704 14:21:58 -- common/autotest_common.sh@936 -- # '[' -z 85236 ']' 00:15:52.704 14:21:58 -- common/autotest_common.sh@940 -- # kill -0 85236 00:15:52.704 14:21:58 -- common/autotest_common.sh@941 -- # uname 00:15:52.704 14:21:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:52.704 14:21:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85236 00:15:52.704 14:21:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:52.704 14:21:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo 
']' 00:15:52.704 killing process with pid 85236 00:15:52.704 14:21:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85236' 00:15:52.704 14:21:58 -- common/autotest_common.sh@955 -- # kill 85236 00:15:52.704 14:21:58 -- common/autotest_common.sh@960 -- # wait 85236 00:15:52.962 14:21:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:52.962 14:21:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:52.962 14:21:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:52.962 14:21:58 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:52.962 14:21:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:52.962 14:21:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.962 14:21:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:52.962 14:21:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.962 14:21:58 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:52.962 00:15:52.962 real 0m13.536s 00:15:52.963 user 0m22.184s 00:15:52.963 sys 0m2.736s 00:15:52.963 14:21:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:52.963 14:21:58 -- common/autotest_common.sh@10 -- # set +x 00:15:52.963 ************************************ 00:15:52.963 END TEST nvmf_queue_depth 00:15:52.963 ************************************ 00:15:52.963 14:21:58 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:52.963 14:21:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:52.963 14:21:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:52.963 14:21:58 -- common/autotest_common.sh@10 -- # set +x 00:15:52.963 ************************************ 00:15:52.963 START TEST nvmf_multipath 00:15:52.963 ************************************ 00:15:52.963 14:21:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:52.963 * Looking for test storage... 00:15:52.963 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:52.963 14:21:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:52.963 14:21:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:52.963 14:21:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:53.222 14:21:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:53.222 14:21:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:53.222 14:21:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:53.222 14:21:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:53.222 14:21:58 -- scripts/common.sh@335 -- # IFS=.-: 00:15:53.222 14:21:58 -- scripts/common.sh@335 -- # read -ra ver1 00:15:53.222 14:21:58 -- scripts/common.sh@336 -- # IFS=.-: 00:15:53.222 14:21:58 -- scripts/common.sh@336 -- # read -ra ver2 00:15:53.222 14:21:58 -- scripts/common.sh@337 -- # local 'op=<' 00:15:53.222 14:21:58 -- scripts/common.sh@339 -- # ver1_l=2 00:15:53.222 14:21:58 -- scripts/common.sh@340 -- # ver2_l=1 00:15:53.222 14:21:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:53.222 14:21:58 -- scripts/common.sh@343 -- # case "$op" in 00:15:53.222 14:21:58 -- scripts/common.sh@344 -- # : 1 00:15:53.222 14:21:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:53.222 14:21:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:53.222 14:21:58 -- scripts/common.sh@364 -- # decimal 1 00:15:53.222 14:21:58 -- scripts/common.sh@352 -- # local d=1 00:15:53.222 14:21:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:53.222 14:21:58 -- scripts/common.sh@354 -- # echo 1 00:15:53.222 14:21:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:53.222 14:21:58 -- scripts/common.sh@365 -- # decimal 2 00:15:53.222 14:21:58 -- scripts/common.sh@352 -- # local d=2 00:15:53.222 14:21:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:53.222 14:21:58 -- scripts/common.sh@354 -- # echo 2 00:15:53.222 14:21:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:53.222 14:21:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:53.222 14:21:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:53.222 14:21:58 -- scripts/common.sh@367 -- # return 0 00:15:53.222 14:21:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:53.222 14:21:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:53.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.222 --rc genhtml_branch_coverage=1 00:15:53.222 --rc genhtml_function_coverage=1 00:15:53.222 --rc genhtml_legend=1 00:15:53.222 --rc geninfo_all_blocks=1 00:15:53.222 --rc geninfo_unexecuted_blocks=1 00:15:53.222 00:15:53.222 ' 00:15:53.222 14:21:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:53.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.222 --rc genhtml_branch_coverage=1 00:15:53.222 --rc genhtml_function_coverage=1 00:15:53.222 --rc genhtml_legend=1 00:15:53.222 --rc geninfo_all_blocks=1 00:15:53.222 --rc geninfo_unexecuted_blocks=1 00:15:53.222 00:15:53.222 ' 00:15:53.222 14:21:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:53.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.222 --rc genhtml_branch_coverage=1 00:15:53.222 --rc genhtml_function_coverage=1 00:15:53.222 --rc genhtml_legend=1 00:15:53.222 --rc geninfo_all_blocks=1 00:15:53.222 --rc geninfo_unexecuted_blocks=1 00:15:53.222 00:15:53.222 ' 00:15:53.222 14:21:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:53.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.222 --rc genhtml_branch_coverage=1 00:15:53.222 --rc genhtml_function_coverage=1 00:15:53.222 --rc genhtml_legend=1 00:15:53.222 --rc geninfo_all_blocks=1 00:15:53.222 --rc geninfo_unexecuted_blocks=1 00:15:53.222 00:15:53.222 ' 00:15:53.222 14:21:58 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:53.222 14:21:58 -- nvmf/common.sh@7 -- # uname -s 00:15:53.222 14:21:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:53.222 14:21:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:53.222 14:21:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:53.222 14:21:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:53.222 14:21:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:53.222 14:21:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:53.222 14:21:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:53.222 14:21:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:53.222 14:21:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:53.222 14:21:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:53.222 14:21:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:15:53.222 
14:21:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:15:53.222 14:21:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:53.222 14:21:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:53.222 14:21:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:53.222 14:21:58 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:53.222 14:21:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:53.222 14:21:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:53.222 14:21:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:53.223 14:21:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.223 14:21:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.223 14:21:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.223 14:21:58 -- paths/export.sh@5 -- # export PATH 00:15:53.223 14:21:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.223 14:21:58 -- nvmf/common.sh@46 -- # : 0 00:15:53.223 14:21:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:53.223 14:21:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:53.223 14:21:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:53.223 14:21:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:53.223 14:21:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:53.223 14:21:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
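The nvmftestinit call that follows tears down and rebuilds the virtual test network; condensed, nvmf_veth_init amounts to the iproute2/iptables sequence below (device names and 10.0.0.x addresses exactly as in the trace):
  ip netns add nvmf_tgt_ns_spdk
  # three veth pairs: the *_if ends carry traffic, the *_br ends join the bridge
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # target-side interfaces move into the namespace the nvmf_tgt runs in
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring the links up on both sides
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  for dev in nvmf_tgt_if nvmf_tgt_if2 lo; do ip netns exec nvmf_tgt_ns_spdk ip link set "$dev" up; done
  # bridge the root-namespace ends together and let NVMe/TCP (port 4420) through
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT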
00:15:53.223 14:21:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:53.223 14:21:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:53.223 14:21:58 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:53.223 14:21:58 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:53.223 14:21:58 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:53.223 14:21:58 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:53.223 14:21:58 -- target/multipath.sh@43 -- # nvmftestinit 00:15:53.223 14:21:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:53.223 14:21:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:53.223 14:21:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:53.223 14:21:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:53.223 14:21:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:53.223 14:21:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.223 14:21:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:53.223 14:21:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.223 14:21:58 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:53.223 14:21:58 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:53.223 14:21:58 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:53.223 14:21:58 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:53.223 14:21:58 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:53.223 14:21:58 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:53.223 14:21:58 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:53.223 14:21:58 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:53.223 14:21:58 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:53.223 14:21:58 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:53.223 14:21:58 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:53.223 14:21:58 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:53.223 14:21:58 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:53.223 14:21:58 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:53.223 14:21:58 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:53.223 14:21:58 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:53.223 14:21:58 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:53.223 14:21:58 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:53.223 14:21:58 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:53.223 14:21:58 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:53.223 Cannot find device "nvmf_tgt_br" 00:15:53.223 14:21:58 -- nvmf/common.sh@154 -- # true 00:15:53.223 14:21:58 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:53.223 Cannot find device "nvmf_tgt_br2" 00:15:53.223 14:21:58 -- nvmf/common.sh@155 -- # true 00:15:53.223 14:21:58 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:53.223 14:21:58 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:53.223 Cannot find device "nvmf_tgt_br" 00:15:53.223 14:21:58 -- nvmf/common.sh@157 -- # true 00:15:53.223 14:21:58 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:53.223 Cannot find device "nvmf_tgt_br2" 00:15:53.223 14:21:58 -- nvmf/common.sh@158 -- # true 00:15:53.223 14:21:58 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:53.223 14:21:58 -- 
nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:53.223 14:21:58 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:53.223 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:53.223 14:21:58 -- nvmf/common.sh@161 -- # true 00:15:53.223 14:21:58 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:53.223 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:53.223 14:21:58 -- nvmf/common.sh@162 -- # true 00:15:53.223 14:21:58 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:53.223 14:21:58 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:53.223 14:21:58 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:53.223 14:21:58 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:53.482 14:21:58 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:53.482 14:21:58 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:53.482 14:21:58 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:53.482 14:21:58 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:53.482 14:21:58 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:53.482 14:21:58 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:53.482 14:21:58 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:53.482 14:21:58 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:53.482 14:21:58 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:53.482 14:21:58 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:53.482 14:21:58 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:53.482 14:21:58 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:53.482 14:21:58 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:53.482 14:21:58 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:53.482 14:21:58 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:53.482 14:21:58 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:53.482 14:21:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:53.482 14:21:59 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:53.482 14:21:59 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:53.482 14:21:59 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:53.482 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:53.482 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:15:53.482 00:15:53.482 --- 10.0.0.2 ping statistics --- 00:15:53.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.482 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:15:53.482 14:21:59 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:53.482 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:53.482 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:15:53.482 00:15:53.482 --- 10.0.0.3 ping statistics --- 00:15:53.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.482 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:15:53.482 14:21:59 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:53.482 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:53.482 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:53.482 00:15:53.482 --- 10.0.0.1 ping statistics --- 00:15:53.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.482 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:53.482 14:21:59 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:53.482 14:21:59 -- nvmf/common.sh@421 -- # return 0 00:15:53.482 14:21:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:53.482 14:21:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:53.482 14:21:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:53.482 14:21:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:53.482 14:21:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:53.482 14:21:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:53.482 14:21:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:53.482 14:21:59 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:15:53.482 14:21:59 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:15:53.482 14:21:59 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:15:53.482 14:21:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:53.482 14:21:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:53.482 14:21:59 -- common/autotest_common.sh@10 -- # set +x 00:15:53.482 14:21:59 -- nvmf/common.sh@469 -- # nvmfpid=85624 00:15:53.482 14:21:59 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:53.482 14:21:59 -- nvmf/common.sh@470 -- # waitforlisten 85624 00:15:53.482 14:21:59 -- common/autotest_common.sh@829 -- # '[' -z 85624 ']' 00:15:53.482 14:21:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.482 14:21:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:53.482 14:21:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.482 14:21:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:53.483 14:21:59 -- common/autotest_common.sh@10 -- # set +x 00:15:53.483 [2024-12-05 14:21:59.108986] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:53.483 [2024-12-05 14:21:59.109228] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:53.741 [2024-12-05 14:21:59.256418] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:53.741 [2024-12-05 14:21:59.326946] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:53.741 [2024-12-05 14:21:59.327340] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:53.741 [2024-12-05 14:21:59.327366] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:53.741 [2024-12-05 14:21:59.327378] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:53.741 [2024-12-05 14:21:59.327494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.741 [2024-12-05 14:21:59.327637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:53.741 [2024-12-05 14:21:59.328400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:53.741 [2024-12-05 14:21:59.328464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.679 14:22:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:54.679 14:22:00 -- common/autotest_common.sh@862 -- # return 0 00:15:54.679 14:22:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:54.679 14:22:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:54.679 14:22:00 -- common/autotest_common.sh@10 -- # set +x 00:15:54.679 14:22:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:54.679 14:22:00 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:54.938 [2024-12-05 14:22:00.433904] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:54.938 14:22:00 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:55.197 Malloc0 00:15:55.197 14:22:00 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:15:55.455 14:22:00 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:55.714 14:22:01 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:55.972 [2024-12-05 14:22:01.436376] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:55.972 14:22:01 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:56.231 [2024-12-05 14:22:01.648630] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:56.231 14:22:01 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:15:56.489 14:22:01 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:15:56.489 14:22:02 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:15:56.489 14:22:02 -- common/autotest_common.sh@1187 -- # local i=0 00:15:56.489 14:22:02 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:56.489 14:22:02 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:56.489 14:22:02 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:59.022 14:22:04 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 
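With listeners on both 10.0.0.2 and 10.0.0.3, the host side of the multipath test is just two fabrics connections into the same subsystem plus a poll of the kernel's per-path ANA files; a rough sketch (hostnqn/hostid being the values common.sh generated above):
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -g -G
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -g -G
  # one namespace reached over two controllers -> two ANA paths to watch
  cat /sys/block/nvme0c0n1/ana_state /sys/block/nvme0c1n1/ana_state
  # the test later flips path states from the target, e.g.:
  scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -n inaccessible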
00:15:59.022 14:22:04 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:59.022 14:22:04 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:59.022 14:22:04 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:59.023 14:22:04 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:59.023 14:22:04 -- common/autotest_common.sh@1197 -- # return 0 00:15:59.023 14:22:04 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:15:59.023 14:22:04 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:15:59.023 14:22:04 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:15:59.023 14:22:04 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:59.023 14:22:04 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:15:59.023 14:22:04 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:15:59.023 14:22:04 -- target/multipath.sh@38 -- # return 0 00:15:59.023 14:22:04 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:15:59.023 14:22:04 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:15:59.023 14:22:04 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:15:59.023 14:22:04 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:15:59.023 14:22:04 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:15:59.023 14:22:04 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:15:59.023 14:22:04 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:15:59.023 14:22:04 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:59.023 14:22:04 -- target/multipath.sh@22 -- # local timeout=20 00:15:59.023 14:22:04 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:59.023 14:22:04 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:59.023 14:22:04 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:59.023 14:22:04 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:15:59.023 14:22:04 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:59.023 14:22:04 -- target/multipath.sh@22 -- # local timeout=20 00:15:59.023 14:22:04 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:59.023 14:22:04 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:59.023 14:22:04 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:59.023 14:22:04 -- target/multipath.sh@85 -- # echo numa 00:15:59.023 14:22:04 -- target/multipath.sh@88 -- # fio_pid=85767 00:15:59.023 14:22:04 -- target/multipath.sh@90 -- # sleep 1 00:15:59.023 14:22:04 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:59.023 [global] 00:15:59.023 thread=1 00:15:59.023 invalidate=1 00:15:59.023 rw=randrw 00:15:59.023 time_based=1 00:15:59.023 runtime=6 00:15:59.023 ioengine=libaio 00:15:59.023 direct=1 00:15:59.023 bs=4096 00:15:59.023 iodepth=128 00:15:59.023 norandommap=0 00:15:59.023 numjobs=1 00:15:59.023 00:15:59.023 verify_dump=1 00:15:59.023 verify_backlog=512 00:15:59.023 verify_state_save=0 00:15:59.023 do_verify=1 00:15:59.023 verify=crc32c-intel 00:15:59.023 [job0] 00:15:59.023 filename=/dev/nvme0n1 00:15:59.023 Could not set queue depth (nvme0n1) 00:15:59.023 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:59.023 fio-3.35 00:15:59.023 Starting 1 thread 00:15:59.591 14:22:05 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:59.851 14:22:05 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:00.109 14:22:05 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:16:00.109 14:22:05 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:16:00.109 14:22:05 -- target/multipath.sh@22 -- # local timeout=20 00:16:00.109 14:22:05 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:16:00.109 14:22:05 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:16:00.109 14:22:05 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:16:00.109 14:22:05 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:16:00.109 14:22:05 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:16:00.109 14:22:05 -- target/multipath.sh@22 -- # local timeout=20 00:16:00.109 14:22:05 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:16:00.109 14:22:05 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:00.109 14:22:05 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:16:00.109 14:22:05 -- target/multipath.sh@25 -- # sleep 1s 00:16:01.044 14:22:06 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:16:01.044 14:22:06 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:16:01.044 14:22:06 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:16:01.044 14:22:06 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:01.304 14:22:06 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:01.563 14:22:07 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:16:01.563 14:22:07 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:16:01.563 14:22:07 -- target/multipath.sh@22 -- # local timeout=20 00:16:01.563 14:22:07 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:16:01.563 14:22:07 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:16:01.823 14:22:07 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:16:01.823 14:22:07 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:16:01.823 14:22:07 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:16:01.823 14:22:07 -- target/multipath.sh@22 -- # local timeout=20 00:16:01.823 14:22:07 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:16:01.823 14:22:07 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:01.823 14:22:07 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:16:01.823 14:22:07 -- target/multipath.sh@25 -- # sleep 1s 00:16:02.760 14:22:08 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:16:02.760 14:22:08 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:16:02.760 14:22:08 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:16:02.760 14:22:08 -- target/multipath.sh@104 -- # wait 85767 00:16:05.293 00:16:05.293 job0: (groupid=0, jobs=1): err= 0: pid=85794: Thu Dec 5 14:22:10 2024 00:16:05.293 read: IOPS=13.2k, BW=51.4MiB/s (53.9MB/s)(309MiB/6004msec) 00:16:05.293 slat (usec): min=5, max=6183, avg=42.66, stdev=191.23 00:16:05.293 clat (usec): min=879, max=13792, avg=6680.38, stdev=1076.19 00:16:05.293 lat (usec): min=939, max=13814, avg=6723.04, stdev=1083.49 00:16:05.293 clat percentiles (usec): 00:16:05.293 | 1.00th=[ 4146], 5.00th=[ 5211], 10.00th=[ 5604], 20.00th=[ 5932], 00:16:05.293 | 30.00th=[ 6063], 40.00th=[ 6259], 50.00th=[ 6521], 60.00th=[ 6849], 00:16:05.293 | 70.00th=[ 7111], 80.00th=[ 7439], 90.00th=[ 7898], 95.00th=[ 8586], 00:16:05.293 | 99.00th=[10159], 99.50th=[10683], 99.90th=[11469], 99.95th=[11863], 00:16:05.293 | 99.99th=[12256] 00:16:05.293 bw ( KiB/s): min=13440, max=34408, per=53.76%, avg=28301.18, stdev=7184.79, samples=11 00:16:05.293 iops : min= 3360, max= 8602, avg=7075.27, stdev=1796.19, samples=11 00:16:05.293 write: IOPS=7854, BW=30.7MiB/s (32.2MB/s)(156MiB/5083msec); 0 zone resets 00:16:05.293 slat (usec): min=15, max=4459, avg=56.13, stdev=135.28 00:16:05.293 clat (usec): min=947, max=13261, avg=5846.40, stdev=905.61 00:16:05.293 lat (usec): min=994, max=13305, avg=5902.53, stdev=908.65 00:16:05.293 clat percentiles (usec): 00:16:05.293 | 1.00th=[ 3359], 5.00th=[ 4293], 10.00th=[ 4948], 20.00th=[ 5276], 00:16:05.293 | 30.00th=[ 5538], 40.00th=[ 5669], 50.00th=[ 5866], 60.00th=[ 5997], 00:16:05.293 | 70.00th=[ 6194], 80.00th=[ 6390], 90.00th=[ 6718], 95.00th=[ 7111], 00:16:05.293 | 99.00th=[ 8848], 99.50th=[ 9503], 99.90th=[10683], 99.95th=[11338], 00:16:05.293 | 99.99th=[12780] 00:16:05.293 bw ( KiB/s): min=13968, max=34320, per=90.03%, avg=28286.45, stdev=6917.97, samples=11 00:16:05.293 iops : min= 3492, max= 8580, avg=7071.55, stdev=1729.45, samples=11 00:16:05.293 lat (usec) : 1000=0.01% 00:16:05.293 lat (msec) : 2=0.02%, 4=1.70%, 10=97.34%, 20=0.93% 00:16:05.293 cpu : usr=6.23%, sys=25.37%, ctx=7268, majf=0, minf=145 00:16:05.293 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:16:05.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.293 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:05.293 issued rwts: total=79019,39926,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:05.293 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:05.293 00:16:05.293 Run status group 0 (all jobs): 00:16:05.293 READ: bw=51.4MiB/s (53.9MB/s), 51.4MiB/s-51.4MiB/s (53.9MB/s-53.9MB/s), io=309MiB (324MB), run=6004-6004msec 00:16:05.293 WRITE: bw=30.7MiB/s (32.2MB/s), 30.7MiB/s-30.7MiB/s (32.2MB/s-32.2MB/s), io=156MiB (164MB), run=5083-5083msec 00:16:05.293 00:16:05.293 Disk stats (read/write): 00:16:05.293 nvme0n1: ios=78232/39057, merge=0/0, ticks=488966/211208, in_queue=700174, util=98.58% 00:16:05.293 14:22:10 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:16:05.293 14:22:10 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:16:05.551 14:22:10 -- target/multipath.sh@109 -- # check_ana_state 
nvme0c0n1 optimized 00:16:05.551 14:22:10 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:16:05.551 14:22:10 -- target/multipath.sh@22 -- # local timeout=20 00:16:05.551 14:22:10 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:16:05.551 14:22:10 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:16:05.551 14:22:10 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:16:05.551 14:22:10 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:16:05.551 14:22:10 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:16:05.551 14:22:10 -- target/multipath.sh@22 -- # local timeout=20 00:16:05.551 14:22:10 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:16:05.551 14:22:10 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:05.551 14:22:10 -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:16:05.551 14:22:10 -- target/multipath.sh@25 -- # sleep 1s 00:16:06.486 14:22:11 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:16:06.486 14:22:11 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:06.486 14:22:11 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:16:06.486 14:22:11 -- target/multipath.sh@113 -- # echo round-robin 00:16:06.486 14:22:11 -- target/multipath.sh@116 -- # fio_pid=85922 00:16:06.486 14:22:11 -- target/multipath.sh@118 -- # sleep 1 00:16:06.486 14:22:11 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:16:06.486 [global] 00:16:06.486 thread=1 00:16:06.486 invalidate=1 00:16:06.486 rw=randrw 00:16:06.486 time_based=1 00:16:06.486 runtime=6 00:16:06.486 ioengine=libaio 00:16:06.486 direct=1 00:16:06.486 bs=4096 00:16:06.486 iodepth=128 00:16:06.486 norandommap=0 00:16:06.486 numjobs=1 00:16:06.486 00:16:06.486 verify_dump=1 00:16:06.486 verify_backlog=512 00:16:06.486 verify_state_save=0 00:16:06.486 do_verify=1 00:16:06.486 verify=crc32c-intel 00:16:06.486 [job0] 00:16:06.486 filename=/dev/nvme0n1 00:16:06.486 Could not set queue depth (nvme0n1) 00:16:06.486 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:06.486 fio-3.35 00:16:06.486 Starting 1 thread 00:16:07.421 14:22:12 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:07.680 14:22:13 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:07.939 14:22:13 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:16:07.939 14:22:13 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:16:07.939 14:22:13 -- target/multipath.sh@22 -- # local timeout=20 00:16:07.939 14:22:13 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:16:07.939 14:22:13 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:16:07.939 14:22:13 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:16:07.939 14:22:13 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:16:07.939 14:22:13 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:16:07.939 14:22:13 -- target/multipath.sh@22 -- # local timeout=20 00:16:07.939 14:22:13 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:16:07.939 14:22:13 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:07.939 14:22:13 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:16:07.939 14:22:13 -- target/multipath.sh@25 -- # sleep 1s 00:16:08.875 14:22:14 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:16:08.875 14:22:14 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:08.875 14:22:14 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:16:08.875 14:22:14 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:09.134 14:22:14 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:09.393 14:22:14 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:16:09.393 14:22:14 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:16:09.393 14:22:14 -- target/multipath.sh@22 -- # local timeout=20 00:16:09.393 14:22:14 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:16:09.393 14:22:14 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:16:09.393 14:22:14 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:16:09.393 14:22:14 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:16:09.393 14:22:14 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:16:09.393 14:22:14 -- target/multipath.sh@22 -- # local timeout=20 00:16:09.393 14:22:14 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:16:09.393 14:22:14 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:09.393 14:22:14 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:16:09.393 14:22:14 -- target/multipath.sh@25 -- # sleep 1s 00:16:10.767 14:22:15 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:16:10.767 14:22:15 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:16:10.767 14:22:15 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:16:10.767 14:22:15 -- target/multipath.sh@132 -- # wait 85922 00:16:12.708 00:16:12.708 job0: (groupid=0, jobs=1): err= 0: pid=85943: Thu Dec 5 14:22:18 2024 00:16:12.708 read: IOPS=12.8k, BW=50.2MiB/s (52.6MB/s)(301MiB/6002msec) 00:16:12.708 slat (usec): min=4, max=5746, avg=39.08, stdev=183.10 00:16:12.708 clat (usec): min=593, max=16758, avg=6881.29, stdev=1662.10 00:16:12.708 lat (usec): min=613, max=16767, avg=6920.38, stdev=1665.69 00:16:12.708 clat percentiles (usec): 00:16:12.708 | 1.00th=[ 2606], 5.00th=[ 4228], 10.00th=[ 5342], 20.00th=[ 5997], 00:16:12.708 | 30.00th=[ 6194], 40.00th=[ 6390], 50.00th=[ 6718], 60.00th=[ 7046], 00:16:12.708 | 70.00th=[ 7373], 80.00th=[ 7767], 90.00th=[ 8717], 95.00th=[ 9896], 00:16:12.708 | 99.00th=[12387], 99.50th=[13173], 99.90th=[14484], 99.95th=[15401], 00:16:12.708 | 99.99th=[15926] 00:16:12.708 bw ( KiB/s): min= 9080, max=35208, per=53.39%, avg=27425.45, stdev=8880.55, samples=11 00:16:12.708 iops : min= 2270, max= 8802, avg=6856.36, stdev=2220.14, samples=11 00:16:12.708 write: IOPS=7691, BW=30.0MiB/s (31.5MB/s)(153MiB/5093msec); 0 zone resets 00:16:12.708 slat (usec): min=15, max=2028, avg=51.02, stdev=120.01 00:16:12.708 clat (usec): min=412, max=13511, avg=5900.35, stdev=1566.01 00:16:12.708 lat (usec): min=452, max=13556, avg=5951.37, stdev=1568.55 00:16:12.708 clat percentiles (usec): 00:16:12.708 | 1.00th=[ 1844], 5.00th=[ 2933], 10.00th=[ 3687], 20.00th=[ 5145], 00:16:12.709 | 30.00th=[ 5538], 40.00th=[ 5800], 50.00th=[ 5997], 60.00th=[ 6194], 00:16:12.709 | 70.00th=[ 6456], 80.00th=[ 6718], 90.00th=[ 7439], 95.00th=[ 8586], 00:16:12.709 | 99.00th=[10421], 99.50th=[11076], 99.90th=[12256], 99.95th=[12649], 00:16:12.709 | 99.99th=[13173] 00:16:12.709 bw ( KiB/s): min= 9264, max=34576, per=89.17%, avg=27435.64, stdev=8644.54, samples=11 00:16:12.709 iops : min= 2316, max= 8644, avg=6858.91, stdev=2161.14, samples=11 00:16:12.709 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:16:12.709 lat (msec) : 2=0.68%, 4=6.28%, 10=89.34%, 20=3.67% 00:16:12.709 cpu : usr=6.02%, sys=25.05%, ctx=7283, majf=0, minf=127 00:16:12.709 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:16:12.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.709 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:12.709 issued rwts: total=77072,39174,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.709 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:12.709 00:16:12.709 Run status group 0 (all jobs): 00:16:12.709 READ: bw=50.2MiB/s (52.6MB/s), 50.2MiB/s-50.2MiB/s (52.6MB/s-52.6MB/s), io=301MiB (316MB), run=6002-6002msec 00:16:12.709 WRITE: bw=30.0MiB/s (31.5MB/s), 30.0MiB/s-30.0MiB/s (31.5MB/s-31.5MB/s), io=153MiB (160MB), run=5093-5093msec 00:16:12.709 00:16:12.709 Disk stats (read/write): 00:16:12.709 nvme0n1: ios=75120/39174, merge=0/0, ticks=485891/215635, in_queue=701526, util=98.55% 00:16:12.709 14:22:18 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:12.709 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:12.709 14:22:18 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:12.709 14:22:18 -- common/autotest_common.sh@1208 -- # local i=0 00:16:12.709 14:22:18 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:16:12.709 14:22:18 -- 
common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:12.709 14:22:18 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:16:12.709 14:22:18 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:12.982 14:22:18 -- common/autotest_common.sh@1220 -- # return 0 00:16:12.982 14:22:18 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:13.242 14:22:18 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:16:13.242 14:22:18 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:16:13.242 14:22:18 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:16:13.242 14:22:18 -- target/multipath.sh@144 -- # nvmftestfini 00:16:13.242 14:22:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:13.242 14:22:18 -- nvmf/common.sh@116 -- # sync 00:16:13.242 14:22:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:13.242 14:22:18 -- nvmf/common.sh@119 -- # set +e 00:16:13.242 14:22:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:13.242 14:22:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:13.242 rmmod nvme_tcp 00:16:13.242 rmmod nvme_fabrics 00:16:13.242 rmmod nvme_keyring 00:16:13.242 14:22:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:13.242 14:22:18 -- nvmf/common.sh@123 -- # set -e 00:16:13.242 14:22:18 -- nvmf/common.sh@124 -- # return 0 00:16:13.242 14:22:18 -- nvmf/common.sh@477 -- # '[' -n 85624 ']' 00:16:13.242 14:22:18 -- nvmf/common.sh@478 -- # killprocess 85624 00:16:13.242 14:22:18 -- common/autotest_common.sh@936 -- # '[' -z 85624 ']' 00:16:13.242 14:22:18 -- common/autotest_common.sh@940 -- # kill -0 85624 00:16:13.242 14:22:18 -- common/autotest_common.sh@941 -- # uname 00:16:13.242 14:22:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:13.242 14:22:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85624 00:16:13.242 killing process with pid 85624 00:16:13.242 14:22:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:13.242 14:22:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:13.242 14:22:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85624' 00:16:13.242 14:22:18 -- common/autotest_common.sh@955 -- # kill 85624 00:16:13.242 14:22:18 -- common/autotest_common.sh@960 -- # wait 85624 00:16:13.502 14:22:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:13.502 14:22:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:13.502 14:22:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:13.502 14:22:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:13.502 14:22:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:13.502 14:22:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.502 14:22:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:13.502 14:22:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.502 14:22:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:13.502 00:16:13.502 real 0m20.631s 00:16:13.502 user 1m20.930s 00:16:13.502 sys 0m6.408s 00:16:13.502 14:22:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:13.502 14:22:19 -- common/autotest_common.sh@10 -- # set +x 00:16:13.502 ************************************ 00:16:13.502 END TEST nvmf_multipath 00:16:13.502 ************************************ 00:16:13.761 14:22:19 -- nvmf/nvmf.sh@52 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:13.762 14:22:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:13.762 14:22:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:13.762 14:22:19 -- common/autotest_common.sh@10 -- # set +x 00:16:13.762 ************************************ 00:16:13.762 START TEST nvmf_zcopy 00:16:13.762 ************************************ 00:16:13.762 14:22:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:13.762 * Looking for test storage... 00:16:13.762 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:13.762 14:22:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:13.762 14:22:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:13.762 14:22:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:13.762 14:22:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:13.762 14:22:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:13.762 14:22:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:13.762 14:22:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:13.762 14:22:19 -- scripts/common.sh@335 -- # IFS=.-: 00:16:13.762 14:22:19 -- scripts/common.sh@335 -- # read -ra ver1 00:16:13.762 14:22:19 -- scripts/common.sh@336 -- # IFS=.-: 00:16:13.762 14:22:19 -- scripts/common.sh@336 -- # read -ra ver2 00:16:13.762 14:22:19 -- scripts/common.sh@337 -- # local 'op=<' 00:16:13.762 14:22:19 -- scripts/common.sh@339 -- # ver1_l=2 00:16:13.762 14:22:19 -- scripts/common.sh@340 -- # ver2_l=1 00:16:13.762 14:22:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:13.762 14:22:19 -- scripts/common.sh@343 -- # case "$op" in 00:16:13.762 14:22:19 -- scripts/common.sh@344 -- # : 1 00:16:13.762 14:22:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:13.762 14:22:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:13.762 14:22:19 -- scripts/common.sh@364 -- # decimal 1 00:16:13.762 14:22:19 -- scripts/common.sh@352 -- # local d=1 00:16:13.762 14:22:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:13.762 14:22:19 -- scripts/common.sh@354 -- # echo 1 00:16:13.762 14:22:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:13.762 14:22:19 -- scripts/common.sh@365 -- # decimal 2 00:16:13.762 14:22:19 -- scripts/common.sh@352 -- # local d=2 00:16:13.762 14:22:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:13.762 14:22:19 -- scripts/common.sh@354 -- # echo 2 00:16:13.762 14:22:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:13.762 14:22:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:13.762 14:22:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:13.762 14:22:19 -- scripts/common.sh@367 -- # return 0 00:16:13.762 14:22:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:13.762 14:22:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:13.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.762 --rc genhtml_branch_coverage=1 00:16:13.762 --rc genhtml_function_coverage=1 00:16:13.762 --rc genhtml_legend=1 00:16:13.762 --rc geninfo_all_blocks=1 00:16:13.762 --rc geninfo_unexecuted_blocks=1 00:16:13.762 00:16:13.762 ' 00:16:13.762 14:22:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:13.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.762 --rc genhtml_branch_coverage=1 00:16:13.762 --rc genhtml_function_coverage=1 00:16:13.762 --rc genhtml_legend=1 00:16:13.762 --rc geninfo_all_blocks=1 00:16:13.762 --rc geninfo_unexecuted_blocks=1 00:16:13.762 00:16:13.762 ' 00:16:13.762 14:22:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:13.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.762 --rc genhtml_branch_coverage=1 00:16:13.762 --rc genhtml_function_coverage=1 00:16:13.762 --rc genhtml_legend=1 00:16:13.762 --rc geninfo_all_blocks=1 00:16:13.762 --rc geninfo_unexecuted_blocks=1 00:16:13.762 00:16:13.762 ' 00:16:13.762 14:22:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:13.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.762 --rc genhtml_branch_coverage=1 00:16:13.762 --rc genhtml_function_coverage=1 00:16:13.762 --rc genhtml_legend=1 00:16:13.762 --rc geninfo_all_blocks=1 00:16:13.762 --rc geninfo_unexecuted_blocks=1 00:16:13.762 00:16:13.762 ' 00:16:13.762 14:22:19 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:13.762 14:22:19 -- nvmf/common.sh@7 -- # uname -s 00:16:13.762 14:22:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:13.762 14:22:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:13.762 14:22:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:13.762 14:22:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:13.762 14:22:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:13.762 14:22:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:13.762 14:22:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:13.762 14:22:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:13.762 14:22:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:13.762 14:22:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:13.762 14:22:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:16:13.762 
14:22:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:16:13.762 14:22:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:13.762 14:22:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:13.762 14:22:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:13.762 14:22:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:13.762 14:22:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:13.762 14:22:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:13.762 14:22:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:13.762 14:22:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.762 14:22:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.762 14:22:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.762 14:22:19 -- paths/export.sh@5 -- # export PATH 00:16:13.762 14:22:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.762 14:22:19 -- nvmf/common.sh@46 -- # : 0 00:16:13.762 14:22:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:13.762 14:22:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:13.762 14:22:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:13.762 14:22:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:13.762 14:22:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:13.762 14:22:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
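The common.sh trace above generates a fresh host identity for this run: NVME_HOSTNQN comes from nvme gen-hostnqn, and NVME_HOSTID is visibly the UUID portion of that NQN. A minimal sketch of how those two values are typically handed to nvme-cli for an explicit fabrics connect to the listener configured later in this log (target address, port, and subsystem NQN are taken from the trace; the generated host NQN differs on every run):

    # Sketch: derive the host identity the same way the trace above does,
    # then use it for an explicit NVMe/TCP connect.
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # keep only the UUID after the last ':'
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"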
00:16:13.762 14:22:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:13.762 14:22:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:13.762 14:22:19 -- target/zcopy.sh@12 -- # nvmftestinit 00:16:13.762 14:22:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:13.762 14:22:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:13.762 14:22:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:14.021 14:22:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:14.021 14:22:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:14.021 14:22:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.021 14:22:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:14.021 14:22:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.021 14:22:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:14.021 14:22:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:14.021 14:22:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:14.021 14:22:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:14.021 14:22:19 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:14.021 14:22:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:14.021 14:22:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:14.021 14:22:19 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:14.021 14:22:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:14.021 14:22:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:14.022 14:22:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:14.022 14:22:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:14.022 14:22:19 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:14.022 14:22:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:14.022 14:22:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:14.022 14:22:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:14.022 14:22:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:14.022 14:22:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:14.022 14:22:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:14.022 14:22:19 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:14.022 Cannot find device "nvmf_tgt_br" 00:16:14.022 14:22:19 -- nvmf/common.sh@154 -- # true 00:16:14.022 14:22:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:14.022 Cannot find device "nvmf_tgt_br2" 00:16:14.022 14:22:19 -- nvmf/common.sh@155 -- # true 00:16:14.022 14:22:19 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:14.022 14:22:19 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:14.022 Cannot find device "nvmf_tgt_br" 00:16:14.022 14:22:19 -- nvmf/common.sh@157 -- # true 00:16:14.022 14:22:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:14.022 Cannot find device "nvmf_tgt_br2" 00:16:14.022 14:22:19 -- nvmf/common.sh@158 -- # true 00:16:14.022 14:22:19 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:14.022 14:22:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:14.022 14:22:19 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:14.022 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:14.022 14:22:19 -- nvmf/common.sh@161 -- # true 00:16:14.022 14:22:19 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:14.022 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:14.022 14:22:19 -- nvmf/common.sh@162 -- # true 00:16:14.022 14:22:19 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:14.022 14:22:19 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:14.022 14:22:19 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:14.022 14:22:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:14.022 14:22:19 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:14.022 14:22:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:14.022 14:22:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:14.022 14:22:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:14.022 14:22:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:14.022 14:22:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:14.022 14:22:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:14.022 14:22:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:14.022 14:22:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:14.022 14:22:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:14.022 14:22:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:14.022 14:22:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:14.022 14:22:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:14.022 14:22:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:14.281 14:22:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:14.281 14:22:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:14.281 14:22:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:14.281 14:22:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:14.281 14:22:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:14.281 14:22:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:14.281 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:14.281 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:16:14.281 00:16:14.281 --- 10.0.0.2 ping statistics --- 00:16:14.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.281 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:16:14.281 14:22:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:14.281 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:14.281 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:16:14.281 00:16:14.281 --- 10.0.0.3 ping statistics --- 00:16:14.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.281 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:16:14.281 14:22:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:14.281 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:14.281 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:16:14.281 00:16:14.281 --- 10.0.0.1 ping statistics --- 00:16:14.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.281 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:16:14.281 14:22:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:14.281 14:22:19 -- nvmf/common.sh@421 -- # return 0 00:16:14.281 14:22:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:14.281 14:22:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:14.281 14:22:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:14.281 14:22:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:14.281 14:22:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:14.281 14:22:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:14.281 14:22:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:14.281 14:22:19 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:16:14.281 14:22:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:14.281 14:22:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:14.281 14:22:19 -- common/autotest_common.sh@10 -- # set +x 00:16:14.281 14:22:19 -- nvmf/common.sh@469 -- # nvmfpid=86235 00:16:14.281 14:22:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:14.281 14:22:19 -- nvmf/common.sh@470 -- # waitforlisten 86235 00:16:14.281 14:22:19 -- common/autotest_common.sh@829 -- # '[' -z 86235 ']' 00:16:14.281 14:22:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.281 14:22:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:14.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.281 14:22:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.281 14:22:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:14.281 14:22:19 -- common/autotest_common.sh@10 -- # set +x 00:16:14.281 [2024-12-05 14:22:19.823241] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:14.281 [2024-12-05 14:22:19.823320] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:14.541 [2024-12-05 14:22:19.961435] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.541 [2024-12-05 14:22:20.026735] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:14.541 [2024-12-05 14:22:20.026922] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:14.541 [2024-12-05 14:22:20.026941] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:14.541 [2024-12-05 14:22:20.026952] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
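Condensed, the nvmf_veth_init sequence above builds a small bridged topology: the initiator keeps 10.0.0.1 on nvmf_init_if in the root namespace, the target addresses 10.0.0.2 and 10.0.0.3 sit on veth endpoints moved into the nvmf_tgt_ns_spdk namespace, and all host-side ends are joined through the nvmf_br bridge with TCP/4420 allowed in. A hand-written equivalent of the commands visible in the trace (error handling and the cleanup of stale devices omitted):

    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: one initiator-side interface, two target-side interfaces
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # addressing: 10.0.0.1 = initiator, 10.0.0.2/10.0.0.3 = target listeners
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # bring everything up and bridge the host-side ends together
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
        ip link set "$dev" master nvmf_br
    done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

With this in place, pinging 10.0.0.2 and 10.0.0.3 from the root namespace (and 10.0.0.1 from inside nvmf_tgt_ns_spdk) should succeed, which is exactly what the trace verifies before starting nvmf_tgt.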
00:16:14.541 [2024-12-05 14:22:20.026993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:15.109 14:22:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:15.109 14:22:20 -- common/autotest_common.sh@862 -- # return 0 00:16:15.109 14:22:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:15.109 14:22:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:15.109 14:22:20 -- common/autotest_common.sh@10 -- # set +x 00:16:15.367 14:22:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:15.367 14:22:20 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:16:15.367 14:22:20 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:16:15.367 14:22:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.367 14:22:20 -- common/autotest_common.sh@10 -- # set +x 00:16:15.367 [2024-12-05 14:22:20.774018] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:15.367 14:22:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.367 14:22:20 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:15.367 14:22:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.367 14:22:20 -- common/autotest_common.sh@10 -- # set +x 00:16:15.367 14:22:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.367 14:22:20 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:15.368 14:22:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.368 14:22:20 -- common/autotest_common.sh@10 -- # set +x 00:16:15.368 [2024-12-05 14:22:20.794161] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:15.368 14:22:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.368 14:22:20 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:15.368 14:22:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.368 14:22:20 -- common/autotest_common.sh@10 -- # set +x 00:16:15.368 14:22:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.368 14:22:20 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:16:15.368 14:22:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.368 14:22:20 -- common/autotest_common.sh@10 -- # set +x 00:16:15.368 malloc0 00:16:15.368 14:22:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.368 14:22:20 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:15.368 14:22:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.368 14:22:20 -- common/autotest_common.sh@10 -- # set +x 00:16:15.368 14:22:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.368 14:22:20 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:16:15.368 14:22:20 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:16:15.368 14:22:20 -- nvmf/common.sh@520 -- # config=() 00:16:15.368 14:22:20 -- nvmf/common.sh@520 -- # local subsystem config 00:16:15.368 14:22:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:15.368 14:22:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:15.368 { 00:16:15.368 "params": { 00:16:15.368 "name": "Nvme$subsystem", 00:16:15.368 "trtype": "$TEST_TRANSPORT", 
00:16:15.368 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:15.368 "adrfam": "ipv4", 00:16:15.368 "trsvcid": "$NVMF_PORT", 00:16:15.368 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:15.368 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:15.368 "hdgst": ${hdgst:-false}, 00:16:15.368 "ddgst": ${ddgst:-false} 00:16:15.368 }, 00:16:15.368 "method": "bdev_nvme_attach_controller" 00:16:15.368 } 00:16:15.368 EOF 00:16:15.368 )") 00:16:15.368 14:22:20 -- nvmf/common.sh@542 -- # cat 00:16:15.368 14:22:20 -- nvmf/common.sh@544 -- # jq . 00:16:15.368 14:22:20 -- nvmf/common.sh@545 -- # IFS=, 00:16:15.368 14:22:20 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:15.368 "params": { 00:16:15.368 "name": "Nvme1", 00:16:15.368 "trtype": "tcp", 00:16:15.368 "traddr": "10.0.0.2", 00:16:15.368 "adrfam": "ipv4", 00:16:15.368 "trsvcid": "4420", 00:16:15.368 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:15.368 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:15.368 "hdgst": false, 00:16:15.368 "ddgst": false 00:16:15.368 }, 00:16:15.368 "method": "bdev_nvme_attach_controller" 00:16:15.368 }' 00:16:15.368 [2024-12-05 14:22:20.887422] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:15.368 [2024-12-05 14:22:20.887515] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86286 ] 00:16:15.627 [2024-12-05 14:22:21.029966] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.627 [2024-12-05 14:22:21.110475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.886 Running I/O for 10 seconds... 00:16:25.863 00:16:25.863 Latency(us) 00:16:25.863 [2024-12-05T14:22:31.511Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:25.863 [2024-12-05T14:22:31.511Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:16:25.863 Verification LBA range: start 0x0 length 0x1000 00:16:25.864 Nvme1n1 : 10.01 11110.03 86.80 0.00 0.00 11493.47 897.40 18707.55 00:16:25.864 [2024-12-05T14:22:31.512Z] =================================================================================================================== 00:16:25.864 [2024-12-05T14:22:31.512Z] Total : 11110.03 86.80 0.00 0.00 11493.47 897.40 18707.55 00:16:26.123 14:22:31 -- target/zcopy.sh@39 -- # perfpid=86399 00:16:26.123 14:22:31 -- target/zcopy.sh@41 -- # xtrace_disable 00:16:26.123 14:22:31 -- common/autotest_common.sh@10 -- # set +x 00:16:26.123 14:22:31 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:16:26.123 14:22:31 -- nvmf/common.sh@520 -- # config=() 00:16:26.123 14:22:31 -- nvmf/common.sh@520 -- # local subsystem config 00:16:26.123 14:22:31 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:16:26.123 14:22:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:26.123 14:22:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:26.123 { 00:16:26.123 "params": { 00:16:26.123 "name": "Nvme$subsystem", 00:16:26.123 "trtype": "$TEST_TRANSPORT", 00:16:26.123 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:26.123 "adrfam": "ipv4", 00:16:26.123 "trsvcid": "$NVMF_PORT", 00:16:26.123 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:26.123 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:26.123 "hdgst": ${hdgst:-false}, 00:16:26.123 "ddgst": ${ddgst:-false} 
00:16:26.123 }, 00:16:26.123 "method": "bdev_nvme_attach_controller" 00:16:26.123 } 00:16:26.123 EOF 00:16:26.123 )") 00:16:26.123 14:22:31 -- nvmf/common.sh@542 -- # cat 00:16:26.123 [2024-12-05 14:22:31.521070] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.123 [2024-12-05 14:22:31.521125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.123 14:22:31 -- nvmf/common.sh@544 -- # jq . 00:16:26.123 2024/12/05 14:22:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.123 14:22:31 -- nvmf/common.sh@545 -- # IFS=, 00:16:26.123 14:22:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:26.123 "params": { 00:16:26.123 "name": "Nvme1", 00:16:26.123 "trtype": "tcp", 00:16:26.123 "traddr": "10.0.0.2", 00:16:26.123 "adrfam": "ipv4", 00:16:26.123 "trsvcid": "4420", 00:16:26.123 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:26.123 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:26.123 "hdgst": false, 00:16:26.123 "ddgst": false 00:16:26.123 }, 00:16:26.123 "method": "bdev_nvme_attach_controller" 00:16:26.123 }' 00:16:26.123 [2024-12-05 14:22:31.533020] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.123 [2024-12-05 14:22:31.533043] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.123 2024/12/05 14:22:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.123 [2024-12-05 14:22:31.545013] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.123 [2024-12-05 14:22:31.545033] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.123 2024/12/05 14:22:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.123 [2024-12-05 14:22:31.557018] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.123 [2024-12-05 14:22:31.557038] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.123 2024/12/05 14:22:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.123 [2024-12-05 14:22:31.569016] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.123 [2024-12-05 14:22:31.569035] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.123 [2024-12-05 14:22:31.572277] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:16:26.123 [2024-12-05 14:22:31.572377] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86399 ] 00:16:26.123 2024/12/05 14:22:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.123 [2024-12-05 14:22:31.581022] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.123 [2024-12-05 14:22:31.581044] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.123 2024/12/05 14:22:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.123 [2024-12-05 14:22:31.593018] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.123 [2024-12-05 14:22:31.593037] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.123 2024/12/05 14:22:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.123 [2024-12-05 14:22:31.605022] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.123 [2024-12-05 14:22:31.605041] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.123 2024/12/05 14:22:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.123 [2024-12-05 14:22:31.617026] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.124 [2024-12-05 14:22:31.617044] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.124 2024/12/05 14:22:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.124 [2024-12-05 14:22:31.629027] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.124 [2024-12-05 14:22:31.629046] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.124 2024/12/05 14:22:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.124 [2024-12-05 14:22:31.641032] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.124 [2024-12-05 14:22:31.641050] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.124 2024/12/05 14:22:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.124 [2024-12-05 14:22:31.653036] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.124 [2024-12-05 14:22:31.653055] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.124 2024/12/05 14:22:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.124 [2024-12-05 14:22:31.665040] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.124 [2024-12-05 14:22:31.665059] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.124 2024/12/05 14:22:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.124 [2024-12-05 14:22:31.677044] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.124 [2024-12-05 14:22:31.677063] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.124 2024/12/05 14:22:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.124 [2024-12-05 14:22:31.689056] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.124 [2024-12-05 14:22:31.689077] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.124 2024/12/05 14:22:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.124 [2024-12-05 14:22:31.701049] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.124 [2024-12-05 14:22:31.701067] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.124 2024/12/05 14:22:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.124 [2024-12-05 14:22:31.710511] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.124 [2024-12-05 14:22:31.713050] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.124 [2024-12-05 14:22:31.713068] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.124 2024/12/05 14:22:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.124 [2024-12-05 14:22:31.725054] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.124 [2024-12-05 14:22:31.725074] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.124 2024/12/05 14:22:31 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.124 [2024-12-05 14:22:31.737056] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.124 [2024-12-05 14:22:31.737074] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.124 2024/12/05 14:22:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.124 [2024-12-05 14:22:31.749060] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.124 [2024-12-05 14:22:31.749080] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.124 2024/12/05 14:22:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.124 [2024-12-05 14:22:31.761063] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.124 [2024-12-05 14:22:31.761082] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.124 2024/12/05 14:22:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.124 [2024-12-05 14:22:31.766889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.384 [2024-12-05 14:22:31.773101] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.384 [2024-12-05 14:22:31.773138] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.384 2024/12/05 14:22:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.384 [2024-12-05 14:22:31.785070] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.384 [2024-12-05 14:22:31.785091] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.384 2024/12/05 14:22:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.384 [2024-12-05 14:22:31.797074] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.384 [2024-12-05 14:22:31.797092] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.384 2024/12/05 14:22:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.384 [2024-12-05 14:22:31.809077] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.384 [2024-12-05 14:22:31.809097] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.384 2024/12/05 14:22:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.384 [2024-12-05 14:22:31.821077] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.384 [2024-12-05 14:22:31.821095] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.384 2024/12/05 14:22:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.384 [2024-12-05 14:22:31.833079] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.384 [2024-12-05 14:22:31.833097] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.384 2024/12/05 14:22:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.384 [2024-12-05 14:22:31.845084] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.384 [2024-12-05 14:22:31.845102] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.384 2024/12/05 14:22:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.384 [2024-12-05 14:22:31.857087] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.384 [2024-12-05 14:22:31.857105] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.384 2024/12/05 14:22:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.384 [2024-12-05 14:22:31.869112] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.384 [2024-12-05 14:22:31.869135] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.384 2024/12/05 14:22:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.384 [2024-12-05 14:22:31.881108] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.384 [2024-12-05 14:22:31.881130] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.384 2024/12/05 14:22:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.384 [2024-12-05 14:22:31.893114] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.384 [2024-12-05 
14:22:31.893136] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.384 2024/12/05 14:22:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.384 [2024-12-05 14:22:31.905121] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.384 [2024-12-05 14:22:31.905144] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.384 2024/12/05 14:22:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.384 [2024-12-05 14:22:31.917122] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.384 [2024-12-05 14:22:31.917145] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.384 2024/12/05 14:22:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.384 [2024-12-05 14:22:31.929136] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.384 [2024-12-05 14:22:31.929160] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.384 2024/12/05 14:22:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.384 Running I/O for 5 seconds... 
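The repeated "Requested NSID 1 already in use" rejections above show nvmf_subsystem_add_ns being issued for NSID 1 while that namespace is still attached to cnode1 and the second bdevperf job keeps 128-deep random I/O in flight. A rough sketch of how one could reproduce that pattern by hand against the running target, using nvmf_subsystem_add_ns as invoked in the trace plus its counterpart nvmf_subsystem_remove_ns (the remove step, loop bounds, and sleep are illustrative assumptions, not taken from the script):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    for _ in $(seq 1 5); do
        # Rejected with -32602 (Invalid parameters) while NSID 1 is still
        # attached, matching the errors in the trace above.
        "$RPC" nvmf_subsystem_add_ns "$NQN" malloc0 -n 1 || true
        # Detach and re-attach the namespace underneath the running bdevperf job.
        "$RPC" nvmf_subsystem_remove_ns "$NQN" 1
        "$RPC" nvmf_subsystem_add_ns "$NQN" malloc0 -n 1
        sleep 1
    done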
[... the same sequence keeps repeating from 14:22:31.945 through 14:22:33.292 while the I/O run proceeds; the elapsed-time prefix advances from 00:16:26.384 to 00:16:27.687 ...]
00:16:27.687 [2024-12-05 14:22:33.300789] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:27.687 [2024-12-05 14:22:33.300826] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:27.687 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:27.687 [2024-12-05 
14:22:33.309167] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.687 [2024-12-05 14:22:33.309192] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.687 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.687 [2024-12-05 14:22:33.318378] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.687 [2024-12-05 14:22:33.318404] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.687 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.687 [2024-12-05 14:22:33.327954] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.687 [2024-12-05 14:22:33.327979] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.687 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.947 [2024-12-05 14:22:33.337984] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.947 [2024-12-05 14:22:33.338009] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.947 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.947 [2024-12-05 14:22:33.347072] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.947 [2024-12-05 14:22:33.347099] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.947 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.947 [2024-12-05 14:22:33.355926] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.947 [2024-12-05 14:22:33.355967] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.947 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.947 [2024-12-05 14:22:33.364476] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.947 [2024-12-05 14:22:33.364503] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.947 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
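Note: the repeated failures above and below all come from the same JSON-RPC call; nvmf_subsystem_add_ns is invoked again for a namespace ID the subsystem already exposes, so the target rejects it with "Requested NSID 1 already in use" and the client sees code -32602 (Invalid parameters). The following is a minimal sketch, not part of this test run, of what such a request looks like; the method name, NQN, bdev name and NSID are taken from the log entries themselves, while the socket path /var/tmp/spdk.sock is assumed to be SPDK's default RPC listener.

    #!/usr/bin/env python3
    # Sketch: send one nvmf_subsystem_add_ns request over SPDK's JSON-RPC socket
    # and print the reply. When NSID 1 is already in use, the reply carries an
    # error object with code -32602 ("Invalid parameters"), as in the log above.
    import json
    import socket

    SOCK_PATH = "/var/tmp/spdk.sock"  # assumption: default SPDK RPC socket

    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "nvmf_subsystem_add_ns",
        "params": {
            "nqn": "nqn.2016-06.io.spdk:cnode1",
            "namespace": {"bdev_name": "malloc0", "nsid": 1},
        },
    }

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(SOCK_PATH)
        sock.sendall(json.dumps(request).encode())
        reply = json.loads(sock.recv(65536).decode())
        print(reply)  # expected here: {"error": {"code": -32602, ...}}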
00:16:27.947 [2024-12-05 14:22:33.373341] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.947 [2024-12-05 14:22:33.373367] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.947 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.947 [2024-12-05 14:22:33.382255] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.947 [2024-12-05 14:22:33.382281] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.947 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.947 [2024-12-05 14:22:33.390740] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.947 [2024-12-05 14:22:33.390765] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.947 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.947 [2024-12-05 14:22:33.399706] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.947 [2024-12-05 14:22:33.399733] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.947 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.947 [2024-12-05 14:22:33.408193] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.947 [2024-12-05 14:22:33.408219] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.947 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.947 [2024-12-05 14:22:33.417051] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.947 [2024-12-05 14:22:33.417076] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.947 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.947 [2024-12-05 14:22:33.425951] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.947 [2024-12-05 14:22:33.425976] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.947 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:16:27.947 [2024-12-05 14:22:33.434843] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.947 [2024-12-05 14:22:33.434868] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.947 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.947 [2024-12-05 14:22:33.443563] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.947 [2024-12-05 14:22:33.443589] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.947 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.947 [2024-12-05 14:22:33.452370] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.947 [2024-12-05 14:22:33.452396] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.947 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.947 [2024-12-05 14:22:33.461569] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.947 [2024-12-05 14:22:33.461596] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.947 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.947 [2024-12-05 14:22:33.470442] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.947 [2024-12-05 14:22:33.470467] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.947 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.947 [2024-12-05 14:22:33.479646] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.947 [2024-12-05 14:22:33.479672] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.947 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.947 [2024-12-05 14:22:33.488612] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.947 [2024-12-05 14:22:33.488637] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.947 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:16:27.947 [2024-12-05 14:22:33.502337] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.947 [2024-12-05 14:22:33.502363] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.947 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.948 [2024-12-05 14:22:33.510761] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.948 [2024-12-05 14:22:33.510787] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.948 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.948 [2024-12-05 14:22:33.524519] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.948 [2024-12-05 14:22:33.524545] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.948 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.948 [2024-12-05 14:22:33.532658] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.948 [2024-12-05 14:22:33.532683] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.948 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.948 [2024-12-05 14:22:33.543023] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.948 [2024-12-05 14:22:33.543050] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.948 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.948 [2024-12-05 14:22:33.550510] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.948 [2024-12-05 14:22:33.550535] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.948 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.948 [2024-12-05 14:22:33.561713] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.948 [2024-12-05 14:22:33.561740] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.948 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.948 [2024-12-05 14:22:33.570056] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.948 [2024-12-05 14:22:33.570082] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.948 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.948 [2024-12-05 14:22:33.578885] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.948 [2024-12-05 14:22:33.578910] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.948 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:27.948 [2024-12-05 14:22:33.587463] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.948 [2024-12-05 14:22:33.587490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.948 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.208 [2024-12-05 14:22:33.597632] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.208 [2024-12-05 14:22:33.597658] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.208 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.208 [2024-12-05 14:22:33.606200] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.208 [2024-12-05 14:22:33.606225] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.208 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.208 [2024-12-05 14:22:33.617749] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.208 [2024-12-05 14:22:33.617775] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.208 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.208 [2024-12-05 14:22:33.626069] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.208 [2024-12-05 14:22:33.626095] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.208 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.208 [2024-12-05 14:22:33.636702] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.208 [2024-12-05 14:22:33.636727] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.208 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.208 [2024-12-05 14:22:33.647712] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.208 [2024-12-05 14:22:33.647737] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.208 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.208 [2024-12-05 14:22:33.655073] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.208 [2024-12-05 14:22:33.655098] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.208 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.208 [2024-12-05 14:22:33.665818] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.208 [2024-12-05 14:22:33.665843] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.208 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.208 [2024-12-05 14:22:33.674004] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.208 [2024-12-05 14:22:33.674030] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.208 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.208 [2024-12-05 14:22:33.684559] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.208 [2024-12-05 14:22:33.684584] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.208 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.208 [2024-12-05 14:22:33.692735] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.208 [2024-12-05 14:22:33.692761] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.208 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.208 [2024-12-05 14:22:33.703684] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.208 [2024-12-05 14:22:33.703710] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.208 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.208 [2024-12-05 14:22:33.715008] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.208 [2024-12-05 14:22:33.715033] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.208 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.208 [2024-12-05 14:22:33.724149] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.208 [2024-12-05 14:22:33.724177] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.208 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.208 [2024-12-05 14:22:33.731865] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.208 [2024-12-05 14:22:33.731890] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.208 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.208 [2024-12-05 14:22:33.742505] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.208 [2024-12-05 14:22:33.742530] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.208 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.208 [2024-12-05 14:22:33.750790] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.208 [2024-12-05 14:22:33.750826] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.208 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.208 [2024-12-05 14:22:33.759356] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.208 [2024-12-05 14:22:33.759382] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.208 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.208 [2024-12-05 14:22:33.768203] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.208 [2024-12-05 14:22:33.768230] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.208 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.208 [2024-12-05 14:22:33.776789] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.208 [2024-12-05 14:22:33.776827] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.208 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.208 [2024-12-05 14:22:33.785496] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.208 [2024-12-05 14:22:33.785521] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.208 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.208 [2024-12-05 14:22:33.793983] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.208 [2024-12-05 14:22:33.794007] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.208 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.208 [2024-12-05 14:22:33.802957] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.208 [2024-12-05 14:22:33.802982] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.208 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.208 [2024-12-05 14:22:33.811590] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.208 [2024-12-05 14:22:33.811616] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.208 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.208 [2024-12-05 14:22:33.820627] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.208 [2024-12-05 14:22:33.820652] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.208 2024/12/05 14:22:33 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.208 [2024-12-05 14:22:33.833454] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.208 [2024-12-05 14:22:33.833480] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.209 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.209 [2024-12-05 14:22:33.841053] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.209 [2024-12-05 14:22:33.841080] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.209 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.209 [2024-12-05 14:22:33.852666] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.209 [2024-12-05 14:22:33.852709] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.468 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.468 [2024-12-05 14:22:33.861814] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.468 [2024-12-05 14:22:33.861838] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.468 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.468 [2024-12-05 14:22:33.871745] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.468 [2024-12-05 14:22:33.871770] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.468 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.468 [2024-12-05 14:22:33.879229] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.468 [2024-12-05 14:22:33.879255] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.468 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.468 [2024-12-05 14:22:33.890256] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.468 [2024-12-05 14:22:33.890282] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.468 2024/12/05 14:22:33 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.468 [2024-12-05 14:22:33.904987] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.468 [2024-12-05 14:22:33.905012] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.468 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.468 [2024-12-05 14:22:33.916413] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.468 [2024-12-05 14:22:33.916438] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.468 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.468 [2024-12-05 14:22:33.932035] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.468 [2024-12-05 14:22:33.932060] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.468 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.469 [2024-12-05 14:22:33.943400] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.469 [2024-12-05 14:22:33.943426] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.469 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.469 [2024-12-05 14:22:33.951369] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.469 [2024-12-05 14:22:33.951394] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.469 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.469 [2024-12-05 14:22:33.961818] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.469 [2024-12-05 14:22:33.961843] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.469 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.469 [2024-12-05 14:22:33.969437] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.469 [2024-12-05 14:22:33.969463] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.469 2024/12/05 
14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.469 [2024-12-05 14:22:33.979733] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.469 [2024-12-05 14:22:33.979759] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.469 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.469 [2024-12-05 14:22:33.988026] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.469 [2024-12-05 14:22:33.988053] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.469 2024/12/05 14:22:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.469 [2024-12-05 14:22:33.998024] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.469 [2024-12-05 14:22:33.998050] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.469 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.469 [2024-12-05 14:22:34.006201] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.469 [2024-12-05 14:22:34.006227] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.469 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.469 [2024-12-05 14:22:34.016520] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.469 [2024-12-05 14:22:34.016546] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.469 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.469 [2024-12-05 14:22:34.024613] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.469 [2024-12-05 14:22:34.024638] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.469 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.469 [2024-12-05 14:22:34.033458] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.469 [2024-12-05 14:22:34.033483] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:16:28.469 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.469 [2024-12-05 14:22:34.043585] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.469 [2024-12-05 14:22:34.043611] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.469 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.469 [2024-12-05 14:22:34.051470] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.469 [2024-12-05 14:22:34.051496] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.469 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.469 [2024-12-05 14:22:34.062941] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.469 [2024-12-05 14:22:34.062966] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.469 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.469 [2024-12-05 14:22:34.071098] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.469 [2024-12-05 14:22:34.071124] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.469 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.469 [2024-12-05 14:22:34.079796] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.469 [2024-12-05 14:22:34.079841] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.469 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.469 [2024-12-05 14:22:34.095405] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.469 [2024-12-05 14:22:34.095430] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.469 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.469 [2024-12-05 14:22:34.110646] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.469 [2024-12-05 14:22:34.110672] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:16:28.729 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.729 [2024-12-05 14:22:34.121705] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.729 [2024-12-05 14:22:34.121731] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.729 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.729 [2024-12-05 14:22:34.136783] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.729 [2024-12-05 14:22:34.136818] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.729 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.729 [2024-12-05 14:22:34.148040] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.729 [2024-12-05 14:22:34.148066] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.729 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.729 [2024-12-05 14:22:34.156039] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.729 [2024-12-05 14:22:34.156064] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.729 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.729 [2024-12-05 14:22:34.166637] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.729 [2024-12-05 14:22:34.166662] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.729 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.729 [2024-12-05 14:22:34.174567] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.729 [2024-12-05 14:22:34.174592] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.729 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.729 [2024-12-05 14:22:34.183171] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.729 [2024-12-05 14:22:34.183197] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:16:28.729 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.729 [2024-12-05 14:22:34.192945] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.729 [2024-12-05 14:22:34.192969] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.729 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.729 [2024-12-05 14:22:34.200541] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.730 [2024-12-05 14:22:34.200566] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.730 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.730 [2024-12-05 14:22:34.211482] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.730 [2024-12-05 14:22:34.211508] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.730 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.730 [2024-12-05 14:22:34.221016] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.730 [2024-12-05 14:22:34.221041] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.730 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.730 [2024-12-05 14:22:34.228171] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.730 [2024-12-05 14:22:34.228203] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.730 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.730 [2024-12-05 14:22:34.239293] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.730 [2024-12-05 14:22:34.239317] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.730 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.730 [2024-12-05 14:22:34.248562] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.730 [2024-12-05 14:22:34.248587] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.730 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.730 [2024-12-05 14:22:34.255665] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.730 [2024-12-05 14:22:34.255689] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.730 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.730 [2024-12-05 14:22:34.266895] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.730 [2024-12-05 14:22:34.266920] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.730 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.730 [2024-12-05 14:22:34.274737] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.730 [2024-12-05 14:22:34.274762] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.730 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.730 [2024-12-05 14:22:34.285108] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.730 [2024-12-05 14:22:34.285134] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.730 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.730 [2024-12-05 14:22:34.293689] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.730 [2024-12-05 14:22:34.293714] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.730 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.730 [2024-12-05 14:22:34.302460] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.730 [2024-12-05 14:22:34.302485] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.730 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.730 [2024-12-05 14:22:34.311317] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.730 [2024-12-05 
14:22:34.311342] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.730 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.730 [2024-12-05 14:22:34.319932] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.730 [2024-12-05 14:22:34.319957] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.730 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.730 [2024-12-05 14:22:34.328974] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.730 [2024-12-05 14:22:34.328999] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.730 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.730 [2024-12-05 14:22:34.337547] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.730 [2024-12-05 14:22:34.337572] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.730 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.730 [2024-12-05 14:22:34.346238] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.730 [2024-12-05 14:22:34.346263] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.730 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.730 [2024-12-05 14:22:34.355096] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.730 [2024-12-05 14:22:34.355121] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.730 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.730 [2024-12-05 14:22:34.363634] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.730 [2024-12-05 14:22:34.363659] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.730 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.730 [2024-12-05 14:22:34.372897] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:16:28.730 [2024-12-05 14:22:34.372949] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.990 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.990 [2024-12-05 14:22:34.382266] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.990 [2024-12-05 14:22:34.382291] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.990 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.990 [2024-12-05 14:22:34.390819] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.990 [2024-12-05 14:22:34.390842] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.990 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.990 [2024-12-05 14:22:34.399564] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.990 [2024-12-05 14:22:34.399590] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.990 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.990 [2024-12-05 14:22:34.408281] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.990 [2024-12-05 14:22:34.408307] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.990 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.990 [2024-12-05 14:22:34.422499] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.990 [2024-12-05 14:22:34.422524] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.990 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.990 [2024-12-05 14:22:34.432795] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.990 [2024-12-05 14:22:34.432829] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.990 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.990 [2024-12-05 14:22:34.440042] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:16:28.990 [2024-12-05 14:22:34.440068] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.990 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.990 [2024-12-05 14:22:34.450548] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.990 [2024-12-05 14:22:34.450574] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.990 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.990 [2024-12-05 14:22:34.459205] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.990 [2024-12-05 14:22:34.459241] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.990 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.990 [2024-12-05 14:22:34.468094] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.990 [2024-12-05 14:22:34.468121] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.990 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.990 [2024-12-05 14:22:34.477030] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.990 [2024-12-05 14:22:34.477055] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.990 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.990 [2024-12-05 14:22:34.485902] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.990 [2024-12-05 14:22:34.485944] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.990 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.990 [2024-12-05 14:22:34.494584] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.990 [2024-12-05 14:22:34.494610] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.990 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.990 [2024-12-05 14:22:34.503473] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:16:28.990 [2024-12-05 14:22:34.503499] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.990 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.990 [2024-12-05 14:22:34.512731] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.990 [2024-12-05 14:22:34.512756] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.990 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.991 [2024-12-05 14:22:34.521837] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.991 [2024-12-05 14:22:34.521862] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.991 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.991 [2024-12-05 14:22:34.530722] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.991 [2024-12-05 14:22:34.530748] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.991 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.991 [2024-12-05 14:22:34.544265] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.991 [2024-12-05 14:22:34.544291] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.991 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.991 [2024-12-05 14:22:34.551599] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.991 [2024-12-05 14:22:34.551624] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.991 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.991 [2024-12-05 14:22:34.567334] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.991 [2024-12-05 14:22:34.567360] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.991 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.991 [2024-12-05 14:22:34.578097] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.991 [2024-12-05 14:22:34.578122] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.991 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.991 [2024-12-05 14:22:34.585696] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.991 [2024-12-05 14:22:34.585721] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.991 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.991 [2024-12-05 14:22:34.597237] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.991 [2024-12-05 14:22:34.597263] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.991 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.991 [2024-12-05 14:22:34.605930] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.991 [2024-12-05 14:22:34.605955] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.991 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.991 [2024-12-05 14:22:34.614756] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.991 [2024-12-05 14:22:34.614782] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.991 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.991 [2024-12-05 14:22:34.628730] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.991 [2024-12-05 14:22:34.628757] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.991 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.250 [2024-12-05 14:22:34.637833] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.250 [2024-12-05 14:22:34.637858] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.250 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.250 [2024-12-05 
14:22:34.647313] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.250 [2024-12-05 14:22:34.647339] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.250 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.250 [2024-12-05 14:22:34.655687] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.250 [2024-12-05 14:22:34.655713] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.250 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.250 [2024-12-05 14:22:34.664244] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.250 [2024-12-05 14:22:34.664270] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.250 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.250 [2024-12-05 14:22:34.672864] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.250 [2024-12-05 14:22:34.672889] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.250 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.250 [2024-12-05 14:22:34.681614] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.250 [2024-12-05 14:22:34.681640] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.250 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.250 [2024-12-05 14:22:34.690165] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.250 [2024-12-05 14:22:34.690190] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.251 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.251 [2024-12-05 14:22:34.698848] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.251 [2024-12-05 14:22:34.698872] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.251 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
00:16:29.251 [2024-12-05 14:22:34.707421] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.251 [2024-12-05 14:22:34.707447] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.251 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.251 [2024-12-05 14:22:34.715938] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.251 [2024-12-05 14:22:34.715963] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.251 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.251 [2024-12-05 14:22:34.724360] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.251 [2024-12-05 14:22:34.724385] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.251 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.251 [2024-12-05 14:22:34.733026] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.251 [2024-12-05 14:22:34.733052] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.251 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.251 [2024-12-05 14:22:34.741779] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.251 [2024-12-05 14:22:34.741815] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.251 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.251 [2024-12-05 14:22:34.750093] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.251 [2024-12-05 14:22:34.750118] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.251 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.251 [2024-12-05 14:22:34.758714] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.251 [2024-12-05 14:22:34.758739] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.251 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:16:29.251 [2024-12-05 14:22:34.768332] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.251 [2024-12-05 14:22:34.768357] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.251 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.251 [2024-12-05 14:22:34.776575] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.251 [2024-12-05 14:22:34.776600] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.251 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.251 [2024-12-05 14:22:34.786783] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.251 [2024-12-05 14:22:34.786827] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.251 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.251 [2024-12-05 14:22:34.796331] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.251 [2024-12-05 14:22:34.796357] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.251 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.251 [2024-12-05 14:22:34.803392] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.251 [2024-12-05 14:22:34.803416] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.251 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.251 [2024-12-05 14:22:34.814582] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.251 [2024-12-05 14:22:34.814608] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.251 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.251 [2024-12-05 14:22:34.822841] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.251 [2024-12-05 14:22:34.822878] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.251 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:16:29.251 [2024-12-05 14:22:34.832972] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.251 [2024-12-05 14:22:34.832996] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.251 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.251 [2024-12-05 14:22:34.841477] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.251 [2024-12-05 14:22:34.841503] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.251 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.251 [2024-12-05 14:22:34.855783] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.251 [2024-12-05 14:22:34.855818] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.251 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.251 [2024-12-05 14:22:34.864240] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.251 [2024-12-05 14:22:34.864296] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.251 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.251 [2024-12-05 14:22:34.873457] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.251 [2024-12-05 14:22:34.873483] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.251 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.251 [2024-12-05 14:22:34.884149] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.251 [2024-12-05 14:22:34.884175] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.251 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.251 [2024-12-05 14:22:34.894050] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.251 [2024-12-05 14:22:34.894075] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.511 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.511 [2024-12-05 14:22:34.902369] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.511 [2024-12-05 14:22:34.902394] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.511 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.511 [2024-12-05 14:22:34.917356] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.511 [2024-12-05 14:22:34.917382] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.511 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.511 [2024-12-05 14:22:34.925774] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.511 [2024-12-05 14:22:34.925800] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.511 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.511 [2024-12-05 14:22:34.941256] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.511 [2024-12-05 14:22:34.941281] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.511 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.511 [2024-12-05 14:22:34.957497] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.511 [2024-12-05 14:22:34.957523] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.511 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.511 [2024-12-05 14:22:34.972684] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.511 [2024-12-05 14:22:34.972710] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.511 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.511 [2024-12-05 14:22:34.981013] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.511 [2024-12-05 14:22:34.981038] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.511 2024/12/05 14:22:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.511 [2024-12-05 14:22:34.996403] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.511 [2024-12-05 14:22:34.996430] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.511 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.511 [2024-12-05 14:22:35.012580] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.511 [2024-12-05 14:22:35.012605] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.511 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.511 [2024-12-05 14:22:35.023671] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.511 [2024-12-05 14:22:35.023697] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.511 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.512 [2024-12-05 14:22:35.031635] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.512 [2024-12-05 14:22:35.031661] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.512 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.512 [2024-12-05 14:22:35.042374] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.512 [2024-12-05 14:22:35.042399] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.512 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.512 [2024-12-05 14:22:35.050850] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.512 [2024-12-05 14:22:35.050875] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.512 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.512 [2024-12-05 14:22:35.059314] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.512 [2024-12-05 14:22:35.059339] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.512 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.512 [2024-12-05 14:22:35.067460] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.512 [2024-12-05 14:22:35.067485] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.512 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.512 [2024-12-05 14:22:35.076007] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.512 [2024-12-05 14:22:35.076048] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.512 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.512 [2024-12-05 14:22:35.084437] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.512 [2024-12-05 14:22:35.084462] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.512 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.512 [2024-12-05 14:22:35.092857] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.512 [2024-12-05 14:22:35.092881] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.512 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.512 [2024-12-05 14:22:35.101352] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.512 [2024-12-05 14:22:35.101377] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.512 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.512 [2024-12-05 14:22:35.115156] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.512 [2024-12-05 14:22:35.115181] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.512 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.512 [2024-12-05 14:22:35.123368] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.512 [2024-12-05 14:22:35.123395] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.512 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.512 [2024-12-05 14:22:35.133463] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.512 [2024-12-05 14:22:35.133488] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.512 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.512 [2024-12-05 14:22:35.149873] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.512 [2024-12-05 14:22:35.149898] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.512 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.771 [2024-12-05 14:22:35.161454] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.771 [2024-12-05 14:22:35.161479] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.771 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.771 [2024-12-05 14:22:35.169770] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.771 [2024-12-05 14:22:35.169795] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.771 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.771 [2024-12-05 14:22:35.179834] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.771 [2024-12-05 14:22:35.179858] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.771 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.771 [2024-12-05 14:22:35.189219] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.771 [2024-12-05 14:22:35.189244] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.771 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.771 [2024-12-05 14:22:35.196862] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.771 [2024-12-05 14:22:35.196887] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.771 2024/12/05 14:22:35 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.771 [2024-12-05 14:22:35.207813] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.771 [2024-12-05 14:22:35.207837] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.771 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.771 [2024-12-05 14:22:35.215763] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.771 [2024-12-05 14:22:35.215788] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.771 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.771 [2024-12-05 14:22:35.224332] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.771 [2024-12-05 14:22:35.224357] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.771 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.771 [2024-12-05 14:22:35.232831] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.771 [2024-12-05 14:22:35.232866] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.771 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.771 [2024-12-05 14:22:35.241429] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.771 [2024-12-05 14:22:35.241454] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.771 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.771 [2024-12-05 14:22:35.250307] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.771 [2024-12-05 14:22:35.250332] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.771 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.771 [2024-12-05 14:22:35.258961] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.772 [2024-12-05 14:22:35.258985] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.772 2024/12/05 14:22:35 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.772 [2024-12-05 14:22:35.268436] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.772 [2024-12-05 14:22:35.268462] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.772 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.772 [2024-12-05 14:22:35.276479] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.772 [2024-12-05 14:22:35.276505] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.772 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.772 [2024-12-05 14:22:35.286974] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.772 [2024-12-05 14:22:35.286998] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.772 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.772 [2024-12-05 14:22:35.295252] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.772 [2024-12-05 14:22:35.295276] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.772 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.772 [2024-12-05 14:22:35.306554] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.772 [2024-12-05 14:22:35.306580] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.772 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.772 [2024-12-05 14:22:35.314452] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.772 [2024-12-05 14:22:35.314478] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.772 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.772 [2024-12-05 14:22:35.324785] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.772 [2024-12-05 14:22:35.324826] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.772 2024/12/05 
14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.772 [2024-12-05 14:22:35.332876] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.772 [2024-12-05 14:22:35.332902] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.772 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.772 [2024-12-05 14:22:35.343208] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.772 [2024-12-05 14:22:35.343234] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.772 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.772 [2024-12-05 14:22:35.351198] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.772 [2024-12-05 14:22:35.351224] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.772 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.772 [2024-12-05 14:22:35.361728] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.772 [2024-12-05 14:22:35.361753] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.772 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.772 [2024-12-05 14:22:35.370775] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.772 [2024-12-05 14:22:35.370801] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.772 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.772 [2024-12-05 14:22:35.378327] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.772 [2024-12-05 14:22:35.378352] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.772 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.772 [2024-12-05 14:22:35.389358] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.772 [2024-12-05 14:22:35.389383] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:16:29.772 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.772 [2024-12-05 14:22:35.396854] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.772 [2024-12-05 14:22:35.396878] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.772 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.772 [2024-12-05 14:22:35.407918] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.772 [2024-12-05 14:22:35.407943] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.772 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.772 [2024-12-05 14:22:35.416558] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.772 [2024-12-05 14:22:35.416585] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.031 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.031 [2024-12-05 14:22:35.425592] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.031 [2024-12-05 14:22:35.425618] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.031 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.031 [2024-12-05 14:22:35.434447] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.031 [2024-12-05 14:22:35.434474] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.031 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.031 [2024-12-05 14:22:35.448474] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.031 [2024-12-05 14:22:35.448500] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.031 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.031 [2024-12-05 14:22:35.456338] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.031 [2024-12-05 14:22:35.456363] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:16:30.031 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.031 [2024-12-05 14:22:35.464794] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.031 [2024-12-05 14:22:35.464838] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.031 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.031 [2024-12-05 14:22:35.473021] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.031 [2024-12-05 14:22:35.473045] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.031 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.031 [2024-12-05 14:22:35.481426] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.031 [2024-12-05 14:22:35.481451] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.031 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.031 [2024-12-05 14:22:35.489913] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.031 [2024-12-05 14:22:35.489938] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.031 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.031 [2024-12-05 14:22:35.499624] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.031 [2024-12-05 14:22:35.499648] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.031 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.031 [2024-12-05 14:22:35.507516] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.031 [2024-12-05 14:22:35.507541] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.031 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.031 [2024-12-05 14:22:35.517693] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.031 [2024-12-05 14:22:35.517718] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:16:30.031 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.031 [2024-12-05 14:22:35.525059] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.031 [2024-12-05 14:22:35.525084] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.031 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.031 [2024-12-05 14:22:35.535881] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.031 [2024-12-05 14:22:35.535907] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.031 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.031 [2024-12-05 14:22:35.544097] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.031 [2024-12-05 14:22:35.544124] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.031 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.031 [2024-12-05 14:22:35.554144] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.031 [2024-12-05 14:22:35.554170] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.031 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.031 [2024-12-05 14:22:35.561950] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.031 [2024-12-05 14:22:35.561974] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.031 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.031 [2024-12-05 14:22:35.572581] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.031 [2024-12-05 14:22:35.572606] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.031 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.031 [2024-12-05 14:22:35.580562] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.031 [2024-12-05 14:22:35.580587] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.031 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.031 [2024-12-05 14:22:35.590818] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.031 [2024-12-05 14:22:35.590849] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.031 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.031 [2024-12-05 14:22:35.599028] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.031 [2024-12-05 14:22:35.599052] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.031 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.031 [2024-12-05 14:22:35.608550] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.031 [2024-12-05 14:22:35.608575] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.031 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.031 [2024-12-05 14:22:35.617768] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.031 [2024-12-05 14:22:35.617793] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.031 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.031 [2024-12-05 14:22:35.626934] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.031 [2024-12-05 14:22:35.626969] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.032 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.032 [2024-12-05 14:22:35.636081] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.032 [2024-12-05 14:22:35.636108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.032 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.032 [2024-12-05 14:22:35.645197] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.032 [2024-12-05 
14:22:35.645222] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.032 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.032 [2024-12-05 14:22:35.653984] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.032 [2024-12-05 14:22:35.654009] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.032 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.032 [2024-12-05 14:22:35.662904] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.032 [2024-12-05 14:22:35.662930] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.032 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.032 [2024-12-05 14:22:35.671569] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.032 [2024-12-05 14:22:35.671594] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.032 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.290 [2024-12-05 14:22:35.681637] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.290 [2024-12-05 14:22:35.681663] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.290 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.290 [2024-12-05 14:22:35.690632] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.290 [2024-12-05 14:22:35.690657] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.290 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.290 [2024-12-05 14:22:35.699360] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.290 [2024-12-05 14:22:35.699385] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.290 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.290 [2024-12-05 14:22:35.708236] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:16:30.290 [2024-12-05 14:22:35.708262] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.290 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.290 [2024-12-05 14:22:35.717141] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.290 [2024-12-05 14:22:35.717166] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.290 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.290 [2024-12-05 14:22:35.726003] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.290 [2024-12-05 14:22:35.726028] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.290 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.290 [2024-12-05 14:22:35.734972] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.290 [2024-12-05 14:22:35.734997] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.290 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.290 [2024-12-05 14:22:35.744192] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.290 [2024-12-05 14:22:35.744217] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.290 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.290 [2024-12-05 14:22:35.752631] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.290 [2024-12-05 14:22:35.752657] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.290 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.290 [2024-12-05 14:22:35.761506] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.291 [2024-12-05 14:22:35.761532] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.291 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.291 [2024-12-05 14:22:35.769985] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:16:30.291 [2024-12-05 14:22:35.770009] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.291 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.291 [2024-12-05 14:22:35.778746] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.291 [2024-12-05 14:22:35.778771] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.291 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.291 [2024-12-05 14:22:35.787187] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.291 [2024-12-05 14:22:35.787211] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.291 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.291 [2024-12-05 14:22:35.796134] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.291 [2024-12-05 14:22:35.796161] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.291 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.291 [2024-12-05 14:22:35.804521] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.291 [2024-12-05 14:22:35.804546] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.291 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.291 [2024-12-05 14:22:35.813019] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.291 [2024-12-05 14:22:35.813043] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.291 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.291 [2024-12-05 14:22:35.821496] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.291 [2024-12-05 14:22:35.821521] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.291 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.291 [2024-12-05 14:22:35.830254] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:16:30.291 [2024-12-05 14:22:35.830279] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.291 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.291 [2024-12-05 14:22:35.838907] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.291 [2024-12-05 14:22:35.838932] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.291 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.291 [2024-12-05 14:22:35.847545] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.291 [2024-12-05 14:22:35.847569] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.291 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.291 [2024-12-05 14:22:35.861871] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.291 [2024-12-05 14:22:35.861895] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.291 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.291 [2024-12-05 14:22:35.870871] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.291 [2024-12-05 14:22:35.870896] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.291 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.291 [2024-12-05 14:22:35.881901] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.291 [2024-12-05 14:22:35.881927] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.291 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.291 [2024-12-05 14:22:35.897724] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.291 [2024-12-05 14:22:35.897749] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.291 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.291 [2024-12-05 14:22:35.913839] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.291 [2024-12-05 14:22:35.913863] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.291 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.291 [2024-12-05 14:22:35.929818] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.291 [2024-12-05 14:22:35.929842] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.291 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.550 [2024-12-05 14:22:35.941475] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.550 [2024-12-05 14:22:35.941499] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.550 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.550 [2024-12-05 14:22:35.957602] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.550 [2024-12-05 14:22:35.957627] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.550 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.550 [2024-12-05 14:22:35.967852] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.550 [2024-12-05 14:22:35.967877] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.550 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.550 [2024-12-05 14:22:35.984130] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.550 [2024-12-05 14:22:35.984158] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.550 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.550 [2024-12-05 14:22:35.994870] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.550 [2024-12-05 14:22:35.994894] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.550 2024/12/05 14:22:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.550 [2024-12-05 
14:22:36.003919] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.550 [2024-12-05 14:22:36.003945] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.550 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.550 [2024-12-05 14:22:36.013020] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.550 [2024-12-05 14:22:36.013045] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.550 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.550 [2024-12-05 14:22:36.020899] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.550 [2024-12-05 14:22:36.020923] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.550 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.550 [2024-12-05 14:22:36.031386] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.550 [2024-12-05 14:22:36.031411] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.550 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.550 [2024-12-05 14:22:36.039549] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.550 [2024-12-05 14:22:36.039574] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.550 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.550 [2024-12-05 14:22:36.049872] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.550 [2024-12-05 14:22:36.049896] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.550 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.550 [2024-12-05 14:22:36.058034] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.550 [2024-12-05 14:22:36.058060] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.550 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
00:16:30.550 [2024-12-05 14:22:36.068395] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.550 [2024-12-05 14:22:36.068420] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.550 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.550 [2024-12-05 14:22:36.076248] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.550 [2024-12-05 14:22:36.076274] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.551 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.551 [2024-12-05 14:22:36.087469] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.551 [2024-12-05 14:22:36.087495] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.551 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.551 [2024-12-05 14:22:36.097880] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.551 [2024-12-05 14:22:36.097904] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.551 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.551 [2024-12-05 14:22:36.105018] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.551 [2024-12-05 14:22:36.105043] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.551 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.551 [2024-12-05 14:22:36.116095] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.551 [2024-12-05 14:22:36.116131] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.551 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.551 [2024-12-05 14:22:36.124347] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.551 [2024-12-05 14:22:36.124372] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.551 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:16:30.551 [2024-12-05 14:22:36.134330] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.551 [2024-12-05 14:22:36.134356] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.551 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.551 [2024-12-05 14:22:36.142512] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.551 [2024-12-05 14:22:36.142537] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.551 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.551 [2024-12-05 14:22:36.153071] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.551 [2024-12-05 14:22:36.153096] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.551 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.551 [2024-12-05 14:22:36.161086] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.551 [2024-12-05 14:22:36.161111] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.551 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.551 [2024-12-05 14:22:36.171662] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.551 [2024-12-05 14:22:36.171689] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.551 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.551 [2024-12-05 14:22:36.179722] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.551 [2024-12-05 14:22:36.179749] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.551 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.551 [2024-12-05 14:22:36.189897] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.551 [2024-12-05 14:22:36.189922] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.551 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:16:30.810 [2024-12-05 14:22:36.199183] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.810 [2024-12-05 14:22:36.199208] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.810 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.810 [2024-12-05 14:22:36.207968] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.811 [2024-12-05 14:22:36.208030] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.811 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.811 [2024-12-05 14:22:36.217773] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.811 [2024-12-05 14:22:36.217798] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.811 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.811 [2024-12-05 14:22:36.225778] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.811 [2024-12-05 14:22:36.225813] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.811 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.811 [2024-12-05 14:22:36.236104] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.811 [2024-12-05 14:22:36.236129] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.811 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.811 [2024-12-05 14:22:36.245351] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.811 [2024-12-05 14:22:36.245376] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.811 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.811 [2024-12-05 14:22:36.261216] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.811 [2024-12-05 14:22:36.261242] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.811 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.811 [2024-12-05 14:22:36.277265] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.811 [2024-12-05 14:22:36.277291] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.811 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.811 [2024-12-05 14:22:36.288624] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.811 [2024-12-05 14:22:36.288650] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.811 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.811 [2024-12-05 14:22:36.304776] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.811 [2024-12-05 14:22:36.304801] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.811 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.811 [2024-12-05 14:22:36.315770] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.811 [2024-12-05 14:22:36.315795] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.811 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.811 [2024-12-05 14:22:36.331768] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.811 [2024-12-05 14:22:36.331794] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.811 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.811 [2024-12-05 14:22:36.342547] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.811 [2024-12-05 14:22:36.342573] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.811 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.811 [2024-12-05 14:22:36.358813] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.811 [2024-12-05 14:22:36.358836] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.811 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.811 [2024-12-05 14:22:36.369478] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.811 [2024-12-05 14:22:36.369504] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.811 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.811 [2024-12-05 14:22:36.384429] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.811 [2024-12-05 14:22:36.384456] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.811 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.811 [2024-12-05 14:22:36.394743] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.811 [2024-12-05 14:22:36.394769] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.811 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.811 [2024-12-05 14:22:36.410903] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.811 [2024-12-05 14:22:36.410929] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.811 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.811 [2024-12-05 14:22:36.421329] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.811 [2024-12-05 14:22:36.421354] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.811 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.811 [2024-12-05 14:22:36.436932] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.811 [2024-12-05 14:22:36.436957] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.811 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.811 [2024-12-05 14:22:36.448015] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.811 [2024-12-05 14:22:36.448040] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.811 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.071 [2024-12-05 14:22:36.456787] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.071 [2024-12-05 14:22:36.456825] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.071 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.071 [2024-12-05 14:22:36.465987] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.071 [2024-12-05 14:22:36.466012] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.071 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.071 [2024-12-05 14:22:36.474317] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.071 [2024-12-05 14:22:36.474344] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.071 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.071 [2024-12-05 14:22:36.482699] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.071 [2024-12-05 14:22:36.482725] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.071 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.071 [2024-12-05 14:22:36.491139] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.071 [2024-12-05 14:22:36.491164] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.071 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.071 [2024-12-05 14:22:36.499760] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.071 [2024-12-05 14:22:36.499785] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.071 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.071 [2024-12-05 14:22:36.508245] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.071 [2024-12-05 14:22:36.508274] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.071 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.071 [2024-12-05 14:22:36.516952] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.071 [2024-12-05 14:22:36.516976] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.071 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.071 [2024-12-05 14:22:36.525727] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.071 [2024-12-05 14:22:36.525752] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.071 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.071 [2024-12-05 14:22:36.534451] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.071 [2024-12-05 14:22:36.534478] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.071 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.071 [2024-12-05 14:22:36.543642] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.071 [2024-12-05 14:22:36.543668] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.071 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.071 [2024-12-05 14:22:36.550955] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.071 [2024-12-05 14:22:36.550979] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.071 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.071 [2024-12-05 14:22:36.561932] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.071 [2024-12-05 14:22:36.561957] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.071 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.071 [2024-12-05 14:22:36.577768] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.071 [2024-12-05 14:22:36.577796] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.071 2024/12/05 14:22:36 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.071 [2024-12-05 14:22:36.593637] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.071 [2024-12-05 14:22:36.593662] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.071 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.071 [2024-12-05 14:22:36.605511] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.071 [2024-12-05 14:22:36.605536] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.072 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.072 [2024-12-05 14:22:36.620706] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.072 [2024-12-05 14:22:36.620732] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.072 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.072 [2024-12-05 14:22:36.636694] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.072 [2024-12-05 14:22:36.636719] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.072 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.072 [2024-12-05 14:22:36.652867] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.072 [2024-12-05 14:22:36.652892] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.072 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.072 [2024-12-05 14:22:36.669205] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.072 [2024-12-05 14:22:36.669230] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.072 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.072 [2024-12-05 14:22:36.685346] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.072 [2024-12-05 14:22:36.685372] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.072 2024/12/05 14:22:36 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.072 [2024-12-05 14:22:36.700926] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.072 [2024-12-05 14:22:36.700950] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.072 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.331 [2024-12-05 14:22:36.717641] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.331 [2024-12-05 14:22:36.717667] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.331 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.331 [2024-12-05 14:22:36.731752] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.331 [2024-12-05 14:22:36.731778] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.331 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.331 [2024-12-05 14:22:36.747366] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.331 [2024-12-05 14:22:36.747392] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.331 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.331 [2024-12-05 14:22:36.763628] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.331 [2024-12-05 14:22:36.763654] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.331 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.331 [2024-12-05 14:22:36.780665] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.331 [2024-12-05 14:22:36.780692] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.331 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.331 [2024-12-05 14:22:36.796697] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.331 [2024-12-05 14:22:36.796734] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.331 2024/12/05 
14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.331 [2024-12-05 14:22:36.812666] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.331 [2024-12-05 14:22:36.812711] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.331 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.331 [2024-12-05 14:22:36.824816] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.331 [2024-12-05 14:22:36.824862] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.331 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.331 [2024-12-05 14:22:36.835026] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.331 [2024-12-05 14:22:36.835050] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.332 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.332 [2024-12-05 14:22:36.851076] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.332 [2024-12-05 14:22:36.851102] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.332 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.332 [2024-12-05 14:22:36.867271] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.332 [2024-12-05 14:22:36.867296] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.332 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.332 [2024-12-05 14:22:36.883939] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.332 [2024-12-05 14:22:36.883964] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.332 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.332 [2024-12-05 14:22:36.900160] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.332 [2024-12-05 14:22:36.900186] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
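
Each of the repeated failures above has the same cause: the call asks the target to attach bdev malloc0 as NSID 1 on nqn.2016-06.io.spdk:cnode1 while that NSID is already occupied, so the target logs "Requested NSID 1 already in use" and the client sees JSON-RPC code -32602 (Invalid parameters). As a rough, hypothetical reproduction sketch (not taken from zcopy.sh itself; the rpc.py helper path is the repo default used elsewhere in this run), the same rejection can be provoked manually:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed helper location
NQN=nqn.2016-06.io.spdk:cnode1

# First attach claims NSID 1 for malloc0 and succeeds.
$RPC nvmf_subsystem_add_ns "$NQN" malloc0 -n 1

# Re-adding the same NSID is refused; the client sees Code=-32602,
# matching the errors repeated throughout this part of the log.
$RPC nvmf_subsystem_add_ns "$NQN" malloc0 -n 1 || echo "rejected as expected"
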
00:16:31.332 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.332 [2024-12-05 14:22:36.916500] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.332 [2024-12-05 14:22:36.916525] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.332 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.332 [2024-12-05 14:22:36.932568] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.332 [2024-12-05 14:22:36.932594] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.332 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.332 [2024-12-05 14:22:36.945136] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.332 [2024-12-05 14:22:36.945161] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.332 00:16:31.332 Latency(us) 00:16:31.332 [2024-12-05T14:22:36.980Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.332 [2024-12-05T14:22:36.980Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:16:31.332 Nvme1n1 : 5.01 14613.92 114.17 0.00 0.00 8748.41 2278.87 14417.92 00:16:31.332 [2024-12-05T14:22:36.980Z] =================================================================================================================== 00:16:31.332 [2024-12-05T14:22:36.980Z] Total : 14613.92 114.17 0.00 0.00 8748.41 2278.87 14417.92 00:16:31.332 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.332 [2024-12-05 14:22:36.956550] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.332 [2024-12-05 14:22:36.956568] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.332 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.332 [2024-12-05 14:22:36.968515] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.332 [2024-12-05 14:22:36.968536] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.332 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.591 [2024-12-05 14:22:36.980554] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already 
in use 00:16:31.591 [2024-12-05 14:22:36.980589] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.591 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.591 [2024-12-05 14:22:36.992525] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.591 [2024-12-05 14:22:36.992544] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.591 2024/12/05 14:22:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.591 [2024-12-05 14:22:37.004525] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.591 [2024-12-05 14:22:37.004544] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.591 2024/12/05 14:22:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.592 [2024-12-05 14:22:37.016530] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.592 [2024-12-05 14:22:37.016549] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.592 2024/12/05 14:22:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.592 [2024-12-05 14:22:37.028532] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.592 [2024-12-05 14:22:37.028550] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.592 2024/12/05 14:22:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.592 [2024-12-05 14:22:37.040536] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.592 [2024-12-05 14:22:37.040554] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.592 2024/12/05 14:22:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.592 [2024-12-05 14:22:37.052541] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.592 [2024-12-05 14:22:37.052560] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.592 2024/12/05 14:22:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.592 [2024-12-05 14:22:37.064542] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:16:31.592 [2024-12-05 14:22:37.064560] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.592 2024/12/05 14:22:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.592 [2024-12-05 14:22:37.076544] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.592 [2024-12-05 14:22:37.076562] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.592 2024/12/05 14:22:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.592 [2024-12-05 14:22:37.088547] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.592 [2024-12-05 14:22:37.088565] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.592 2024/12/05 14:22:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.592 [2024-12-05 14:22:37.100550] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.592 [2024-12-05 14:22:37.100568] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.592 2024/12/05 14:22:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.592 [2024-12-05 14:22:37.112553] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.592 [2024-12-05 14:22:37.112571] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.592 2024/12/05 14:22:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.592 [2024-12-05 14:22:37.124557] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.592 [2024-12-05 14:22:37.124574] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.592 2024/12/05 14:22:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.592 [2024-12-05 14:22:37.136561] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.592 [2024-12-05 14:22:37.136579] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.592 2024/12/05 14:22:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.592 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (86399) - 
No such process 00:16:31.592 14:22:37 -- target/zcopy.sh@49 -- # wait 86399 00:16:31.592 14:22:37 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:31.592 14:22:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.592 14:22:37 -- common/autotest_common.sh@10 -- # set +x 00:16:31.592 14:22:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.592 14:22:37 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:31.592 14:22:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.592 14:22:37 -- common/autotest_common.sh@10 -- # set +x 00:16:31.592 delay0 00:16:31.592 14:22:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.592 14:22:37 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:31.592 14:22:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.592 14:22:37 -- common/autotest_common.sh@10 -- # set +x 00:16:31.592 14:22:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.592 14:22:37 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:31.852 [2024-12-05 14:22:37.336391] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:38.419 Initializing NVMe Controllers 00:16:38.419 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:38.419 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:38.419 Initialization complete. Launching workers. 
00:16:38.419 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 136 00:16:38.419 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 423, failed to submit 33 00:16:38.419 success 257, unsuccess 166, failed 0 00:16:38.419 14:22:43 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:38.419 14:22:43 -- target/zcopy.sh@60 -- # nvmftestfini 00:16:38.419 14:22:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:38.419 14:22:43 -- nvmf/common.sh@116 -- # sync 00:16:38.419 14:22:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:38.419 14:22:43 -- nvmf/common.sh@119 -- # set +e 00:16:38.419 14:22:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:38.419 14:22:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:38.419 rmmod nvme_tcp 00:16:38.419 rmmod nvme_fabrics 00:16:38.419 rmmod nvme_keyring 00:16:38.419 14:22:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:38.419 14:22:43 -- nvmf/common.sh@123 -- # set -e 00:16:38.419 14:22:43 -- nvmf/common.sh@124 -- # return 0 00:16:38.419 14:22:43 -- nvmf/common.sh@477 -- # '[' -n 86235 ']' 00:16:38.419 14:22:43 -- nvmf/common.sh@478 -- # killprocess 86235 00:16:38.419 14:22:43 -- common/autotest_common.sh@936 -- # '[' -z 86235 ']' 00:16:38.419 14:22:43 -- common/autotest_common.sh@940 -- # kill -0 86235 00:16:38.419 14:22:43 -- common/autotest_common.sh@941 -- # uname 00:16:38.419 14:22:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:38.419 14:22:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86235 00:16:38.419 14:22:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:38.419 killing process with pid 86235 00:16:38.419 14:22:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:38.419 14:22:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86235' 00:16:38.419 14:22:43 -- common/autotest_common.sh@955 -- # kill 86235 00:16:38.419 14:22:43 -- common/autotest_common.sh@960 -- # wait 86235 00:16:38.419 14:22:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:38.419 14:22:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:38.419 14:22:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:38.419 14:22:43 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:38.419 14:22:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:38.419 14:22:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.419 14:22:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:38.419 14:22:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.419 14:22:43 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:38.419 ************************************ 00:16:38.419 END TEST nvmf_zcopy 00:16:38.419 ************************************ 00:16:38.419 00:16:38.419 real 0m24.651s 00:16:38.419 user 0m38.572s 00:16:38.419 sys 0m7.440s 00:16:38.419 14:22:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:38.419 14:22:43 -- common/autotest_common.sh@10 -- # set +x 00:16:38.419 14:22:43 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:38.419 14:22:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:38.419 14:22:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:38.419 14:22:43 -- common/autotest_common.sh@10 -- # set +x 00:16:38.419 ************************************ 00:16:38.419 START TEST 
nvmf_nmic 00:16:38.419 ************************************ 00:16:38.419 14:22:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:38.419 * Looking for test storage... 00:16:38.419 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:38.419 14:22:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:38.419 14:22:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:38.420 14:22:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:38.678 14:22:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:38.678 14:22:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:38.678 14:22:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:38.678 14:22:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:38.678 14:22:44 -- scripts/common.sh@335 -- # IFS=.-: 00:16:38.678 14:22:44 -- scripts/common.sh@335 -- # read -ra ver1 00:16:38.678 14:22:44 -- scripts/common.sh@336 -- # IFS=.-: 00:16:38.678 14:22:44 -- scripts/common.sh@336 -- # read -ra ver2 00:16:38.678 14:22:44 -- scripts/common.sh@337 -- # local 'op=<' 00:16:38.678 14:22:44 -- scripts/common.sh@339 -- # ver1_l=2 00:16:38.679 14:22:44 -- scripts/common.sh@340 -- # ver2_l=1 00:16:38.679 14:22:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:38.679 14:22:44 -- scripts/common.sh@343 -- # case "$op" in 00:16:38.679 14:22:44 -- scripts/common.sh@344 -- # : 1 00:16:38.679 14:22:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:38.679 14:22:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:38.679 14:22:44 -- scripts/common.sh@364 -- # decimal 1 00:16:38.679 14:22:44 -- scripts/common.sh@352 -- # local d=1 00:16:38.679 14:22:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:38.679 14:22:44 -- scripts/common.sh@354 -- # echo 1 00:16:38.679 14:22:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:38.679 14:22:44 -- scripts/common.sh@365 -- # decimal 2 00:16:38.679 14:22:44 -- scripts/common.sh@352 -- # local d=2 00:16:38.679 14:22:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:38.679 14:22:44 -- scripts/common.sh@354 -- # echo 2 00:16:38.679 14:22:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:38.679 14:22:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:38.679 14:22:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:38.679 14:22:44 -- scripts/common.sh@367 -- # return 0 00:16:38.679 14:22:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:38.679 14:22:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:38.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.679 --rc genhtml_branch_coverage=1 00:16:38.679 --rc genhtml_function_coverage=1 00:16:38.679 --rc genhtml_legend=1 00:16:38.679 --rc geninfo_all_blocks=1 00:16:38.679 --rc geninfo_unexecuted_blocks=1 00:16:38.679 00:16:38.679 ' 00:16:38.679 14:22:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:38.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.679 --rc genhtml_branch_coverage=1 00:16:38.679 --rc genhtml_function_coverage=1 00:16:38.679 --rc genhtml_legend=1 00:16:38.679 --rc geninfo_all_blocks=1 00:16:38.679 --rc geninfo_unexecuted_blocks=1 00:16:38.679 00:16:38.679 ' 00:16:38.679 14:22:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:38.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.679 --rc 
genhtml_branch_coverage=1 00:16:38.679 --rc genhtml_function_coverage=1 00:16:38.679 --rc genhtml_legend=1 00:16:38.679 --rc geninfo_all_blocks=1 00:16:38.679 --rc geninfo_unexecuted_blocks=1 00:16:38.679 00:16:38.679 ' 00:16:38.679 14:22:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:38.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.679 --rc genhtml_branch_coverage=1 00:16:38.679 --rc genhtml_function_coverage=1 00:16:38.679 --rc genhtml_legend=1 00:16:38.679 --rc geninfo_all_blocks=1 00:16:38.679 --rc geninfo_unexecuted_blocks=1 00:16:38.679 00:16:38.679 ' 00:16:38.679 14:22:44 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:38.679 14:22:44 -- nvmf/common.sh@7 -- # uname -s 00:16:38.679 14:22:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:38.679 14:22:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:38.679 14:22:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:38.679 14:22:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:38.679 14:22:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:38.679 14:22:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:38.679 14:22:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:38.679 14:22:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:38.679 14:22:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:38.679 14:22:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:38.679 14:22:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:16:38.679 14:22:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:16:38.679 14:22:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:38.679 14:22:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:38.679 14:22:44 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:38.679 14:22:44 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:38.679 14:22:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:38.679 14:22:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:38.679 14:22:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:38.679 14:22:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.679 14:22:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.679 14:22:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.679 14:22:44 -- paths/export.sh@5 -- # export PATH 00:16:38.679 14:22:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.679 14:22:44 -- nvmf/common.sh@46 -- # : 0 00:16:38.679 14:22:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:38.679 14:22:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:38.679 14:22:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:38.679 14:22:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:38.679 14:22:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:38.679 14:22:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:38.679 14:22:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:38.679 14:22:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:38.679 14:22:44 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:38.679 14:22:44 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:38.679 14:22:44 -- target/nmic.sh@14 -- # nvmftestinit 00:16:38.679 14:22:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:38.679 14:22:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:38.679 14:22:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:38.679 14:22:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:38.679 14:22:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:38.679 14:22:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.679 14:22:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:38.679 14:22:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.679 14:22:44 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:38.679 14:22:44 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:38.679 14:22:44 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:38.679 14:22:44 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:38.679 14:22:44 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:38.679 14:22:44 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:38.679 14:22:44 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:38.679 14:22:44 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:38.679 14:22:44 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:38.679 14:22:44 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:38.679 14:22:44 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:38.679 14:22:44 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:38.679 14:22:44 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:38.679 14:22:44 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:38.679 14:22:44 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:38.679 14:22:44 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:38.679 14:22:44 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:38.679 14:22:44 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:38.679 14:22:44 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:38.679 14:22:44 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:38.679 Cannot find device "nvmf_tgt_br" 00:16:38.679 14:22:44 -- nvmf/common.sh@154 -- # true 00:16:38.679 14:22:44 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:38.679 Cannot find device "nvmf_tgt_br2" 00:16:38.679 14:22:44 -- nvmf/common.sh@155 -- # true 00:16:38.679 14:22:44 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:38.679 14:22:44 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:38.679 Cannot find device "nvmf_tgt_br" 00:16:38.679 14:22:44 -- nvmf/common.sh@157 -- # true 00:16:38.679 14:22:44 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:38.679 Cannot find device "nvmf_tgt_br2" 00:16:38.679 14:22:44 -- nvmf/common.sh@158 -- # true 00:16:38.679 14:22:44 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:38.679 14:22:44 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:38.679 14:22:44 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:38.679 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:38.679 14:22:44 -- nvmf/common.sh@161 -- # true 00:16:38.679 14:22:44 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:38.679 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:38.679 14:22:44 -- nvmf/common.sh@162 -- # true 00:16:38.679 14:22:44 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:38.679 14:22:44 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:38.679 14:22:44 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:38.680 14:22:44 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:38.680 14:22:44 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:38.680 14:22:44 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:38.938 14:22:44 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:38.938 14:22:44 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:38.938 14:22:44 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:38.938 14:22:44 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:38.938 14:22:44 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:38.938 14:22:44 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:38.938 14:22:44 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:38.938 14:22:44 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:38.938 14:22:44 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:38.938 14:22:44 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:16:38.938 14:22:44 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:38.938 14:22:44 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:38.938 14:22:44 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:38.938 14:22:44 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:38.938 14:22:44 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:38.938 14:22:44 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:38.938 14:22:44 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:38.938 14:22:44 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:38.938 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:38.938 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:16:38.938 00:16:38.938 --- 10.0.0.2 ping statistics --- 00:16:38.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.938 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:16:38.938 14:22:44 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:38.938 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:38.938 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:16:38.938 00:16:38.938 --- 10.0.0.3 ping statistics --- 00:16:38.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.938 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:16:38.938 14:22:44 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:38.938 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:38.938 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:16:38.938 00:16:38.938 --- 10.0.0.1 ping statistics --- 00:16:38.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.938 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:16:38.938 14:22:44 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:38.938 14:22:44 -- nvmf/common.sh@421 -- # return 0 00:16:38.938 14:22:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:38.938 14:22:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:38.938 14:22:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:38.938 14:22:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:38.938 14:22:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:38.938 14:22:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:38.938 14:22:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:38.938 14:22:44 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:38.938 14:22:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:38.938 14:22:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:38.938 14:22:44 -- common/autotest_common.sh@10 -- # set +x 00:16:38.938 14:22:44 -- nvmf/common.sh@469 -- # nvmfpid=86726 00:16:38.938 14:22:44 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:38.938 14:22:44 -- nvmf/common.sh@470 -- # waitforlisten 86726 00:16:38.938 14:22:44 -- common/autotest_common.sh@829 -- # '[' -z 86726 ']' 00:16:38.938 14:22:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.938 14:22:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:38.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
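
At this point nvmf_veth_init has finished building the virtual topology the rest of the run talks over: a dedicated network namespace for the target, veth pairs bridged back to the initiator side, 10.0.0.1 on the host, 10.0.0.2 and 10.0.0.3 inside the namespace, an iptables accept rule for TCP port 4420, and finally nvmf_tgt started inside the namespace. Condensed from the commands traced above (interface, namespace, and address names are the ones this harness uses; ordering is simplified), the setup is roughly:

# Sketch condensed from the nvmf_veth_init trace above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target, ports 4420/4421
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address

# Bring links up, bridge the host-side ends together, open port 4420.
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Launch the target inside the namespace, as nvmfappstart does above.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
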
00:16:38.938 14:22:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.938 14:22:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:38.938 14:22:44 -- common/autotest_common.sh@10 -- # set +x 00:16:38.938 [2024-12-05 14:22:44.553473] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:38.938 [2024-12-05 14:22:44.553560] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:39.197 [2024-12-05 14:22:44.696706] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:39.197 [2024-12-05 14:22:44.785493] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:39.197 [2024-12-05 14:22:44.785668] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:39.197 [2024-12-05 14:22:44.785685] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:39.197 [2024-12-05 14:22:44.785697] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:39.197 [2024-12-05 14:22:44.785874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.197 [2024-12-05 14:22:44.785956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:39.197 [2024-12-05 14:22:44.786574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:39.197 [2024-12-05 14:22:44.786608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.131 14:22:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:40.131 14:22:45 -- common/autotest_common.sh@862 -- # return 0 00:16:40.131 14:22:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:40.131 14:22:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:40.131 14:22:45 -- common/autotest_common.sh@10 -- # set +x 00:16:40.131 14:22:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:40.131 14:22:45 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:40.131 14:22:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.131 14:22:45 -- common/autotest_common.sh@10 -- # set +x 00:16:40.131 [2024-12-05 14:22:45.662039] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:40.131 14:22:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.131 14:22:45 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:40.131 14:22:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.131 14:22:45 -- common/autotest_common.sh@10 -- # set +x 00:16:40.131 Malloc0 00:16:40.131 14:22:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.131 14:22:45 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:40.131 14:22:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.131 14:22:45 -- common/autotest_common.sh@10 -- # set +x 00:16:40.131 14:22:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.131 14:22:45 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:40.131 14:22:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.131 14:22:45 
-- common/autotest_common.sh@10 -- # set +x 00:16:40.131 14:22:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.131 14:22:45 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:40.131 14:22:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.131 14:22:45 -- common/autotest_common.sh@10 -- # set +x 00:16:40.131 [2024-12-05 14:22:45.740113] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:40.131 14:22:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.132 14:22:45 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:40.132 test case1: single bdev can't be used in multiple subsystems 00:16:40.132 14:22:45 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:40.132 14:22:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.132 14:22:45 -- common/autotest_common.sh@10 -- # set +x 00:16:40.132 14:22:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.132 14:22:45 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:40.132 14:22:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.132 14:22:45 -- common/autotest_common.sh@10 -- # set +x 00:16:40.132 14:22:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.132 14:22:45 -- target/nmic.sh@28 -- # nmic_status=0 00:16:40.132 14:22:45 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:40.132 14:22:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.132 14:22:45 -- common/autotest_common.sh@10 -- # set +x 00:16:40.132 [2024-12-05 14:22:45.763858] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:40.132 [2024-12-05 14:22:45.763898] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:40.132 [2024-12-05 14:22:45.763914] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.132 2024/12/05 14:22:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:40.132 request: 00:16:40.132 { 00:16:40.132 "method": "nvmf_subsystem_add_ns", 00:16:40.132 "params": { 00:16:40.132 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:40.132 "namespace": { 00:16:40.132 "bdev_name": "Malloc0" 00:16:40.132 } 00:16:40.132 } 00:16:40.132 } 00:16:40.132 Got JSON-RPC error response 00:16:40.132 GoRPCClient: error on JSON-RPC call 00:16:40.132 14:22:45 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:40.132 14:22:45 -- target/nmic.sh@29 -- # nmic_status=1 00:16:40.132 14:22:45 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:40.132 Adding namespace failed - expected result. 00:16:40.132 14:22:45 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
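
Test case 1 above demonstrates the claim semantics the nmic test is checking: once Malloc0 is attached to cnode1, the NVMe-oF target holds an exclusive_write claim on that bdev, so attaching the same bdev to a second subsystem (cnode2) fails in bdev_open with error=-1 and the RPC returns -32602, which is precisely the "expected result" the script echoes. A minimal manual sketch of the same sequence, using the rpc.py verbs that the test's rpc_cmd helper drives (helper path assumed; arguments copied from the trace above):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed helper location

$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # succeeds and claims Malloc0

$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
    && echo 'unexpected success' \
    || echo 'rejected: Malloc0 already claimed (exclusive_write) by the target'
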
00:16:40.132 test case2: host connect to nvmf target in multiple paths 00:16:40.132 14:22:45 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:40.132 14:22:45 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:40.132 14:22:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.132 14:22:45 -- common/autotest_common.sh@10 -- # set +x 00:16:40.132 [2024-12-05 14:22:45.776049] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:40.389 14:22:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.389 14:22:45 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:40.389 14:22:45 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:40.647 14:22:46 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:40.647 14:22:46 -- common/autotest_common.sh@1187 -- # local i=0 00:16:40.647 14:22:46 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:16:40.647 14:22:46 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:16:40.647 14:22:46 -- common/autotest_common.sh@1194 -- # sleep 2 00:16:42.547 14:22:48 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:16:42.547 14:22:48 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:16:42.547 14:22:48 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:16:42.547 14:22:48 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:16:42.547 14:22:48 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:16:42.547 14:22:48 -- common/autotest_common.sh@1197 -- # return 0 00:16:42.547 14:22:48 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:42.547 [global] 00:16:42.547 thread=1 00:16:42.547 invalidate=1 00:16:42.547 rw=write 00:16:42.547 time_based=1 00:16:42.547 runtime=1 00:16:42.547 ioengine=libaio 00:16:42.547 direct=1 00:16:42.547 bs=4096 00:16:42.547 iodepth=1 00:16:42.547 norandommap=0 00:16:42.547 numjobs=1 00:16:42.547 00:16:42.547 verify_dump=1 00:16:42.547 verify_backlog=512 00:16:42.547 verify_state_save=0 00:16:42.547 do_verify=1 00:16:42.547 verify=crc32c-intel 00:16:42.547 [job0] 00:16:42.547 filename=/dev/nvme0n1 00:16:42.805 Could not set queue depth (nvme0n1) 00:16:42.805 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:42.805 fio-3.35 00:16:42.805 Starting 1 thread 00:16:44.182 00:16:44.182 job0: (groupid=0, jobs=1): err= 0: pid=86841: Thu Dec 5 14:22:49 2024 00:16:44.182 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:16:44.182 slat (nsec): min=13033, max=62939, avg=15820.32, stdev=4621.18 00:16:44.182 clat (usec): min=117, max=485, avg=149.59, stdev=18.33 00:16:44.182 lat (usec): min=131, max=500, avg=165.41, stdev=18.94 00:16:44.182 clat percentiles (usec): 00:16:44.182 | 1.00th=[ 124], 5.00th=[ 128], 10.00th=[ 131], 20.00th=[ 137], 00:16:44.182 | 30.00th=[ 139], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 151], 00:16:44.182 | 70.00th=[ 155], 80.00th=[ 163], 90.00th=[ 174], 
95.00th=[ 184], 00:16:44.182 | 99.00th=[ 200], 99.50th=[ 206], 99.90th=[ 233], 99.95th=[ 330], 00:16:44.182 | 99.99th=[ 486] 00:16:44.182 write: IOPS=3528, BW=13.8MiB/s (14.5MB/s)(13.8MiB/1001msec); 0 zone resets 00:16:44.182 slat (nsec): min=19052, max=92967, avg=23639.22, stdev=6510.45 00:16:44.182 clat (usec): min=80, max=24931, avg=112.57, stdev=417.96 00:16:44.182 lat (usec): min=101, max=24952, avg=136.21, stdev=417.99 00:16:44.182 clat percentiles (usec): 00:16:44.182 | 1.00th=[ 87], 5.00th=[ 89], 10.00th=[ 92], 20.00th=[ 95], 00:16:44.182 | 30.00th=[ 97], 40.00th=[ 100], 50.00th=[ 102], 60.00th=[ 106], 00:16:44.182 | 70.00th=[ 111], 80.00th=[ 117], 90.00th=[ 126], 95.00th=[ 133], 00:16:44.182 | 99.00th=[ 145], 99.50th=[ 155], 99.90th=[ 172], 99.95th=[ 198], 00:16:44.182 | 99.99th=[25035] 00:16:44.182 bw ( KiB/s): min=14472, max=14472, per=100.00%, avg=14472.00, stdev= 0.00, samples=1 00:16:44.182 iops : min= 3618, max= 3618, avg=3618.00, stdev= 0.00, samples=1 00:16:44.182 lat (usec) : 100=22.56%, 250=77.38%, 500=0.05% 00:16:44.182 lat (msec) : 50=0.02% 00:16:44.182 cpu : usr=2.60%, sys=9.20%, ctx=6604, majf=0, minf=5 00:16:44.182 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:44.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.182 issued rwts: total=3072,3532,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:44.182 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:44.182 00:16:44.182 Run status group 0 (all jobs): 00:16:44.182 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:16:44.182 WRITE: bw=13.8MiB/s (14.5MB/s), 13.8MiB/s-13.8MiB/s (14.5MB/s-14.5MB/s), io=13.8MiB (14.5MB), run=1001-1001msec 00:16:44.182 00:16:44.182 Disk stats (read/write): 00:16:44.182 nvme0n1: ios=3022/3072, merge=0/0, ticks=507/382, in_queue=889, util=91.38% 00:16:44.182 14:22:49 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:44.182 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:44.182 14:22:49 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:44.182 14:22:49 -- common/autotest_common.sh@1208 -- # local i=0 00:16:44.182 14:22:49 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:16:44.182 14:22:49 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:44.182 14:22:49 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:16:44.182 14:22:49 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:44.182 14:22:49 -- common/autotest_common.sh@1220 -- # return 0 00:16:44.182 14:22:49 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:44.182 14:22:49 -- target/nmic.sh@53 -- # nvmftestfini 00:16:44.182 14:22:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:44.182 14:22:49 -- nvmf/common.sh@116 -- # sync 00:16:44.182 14:22:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:44.182 14:22:49 -- nvmf/common.sh@119 -- # set +e 00:16:44.182 14:22:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:44.182 14:22:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:44.182 rmmod nvme_tcp 00:16:44.182 rmmod nvme_fabrics 00:16:44.182 rmmod nvme_keyring 00:16:44.182 14:22:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:44.182 14:22:49 -- nvmf/common.sh@123 -- # set -e 00:16:44.182 14:22:49 -- nvmf/common.sh@124 -- # return 0 
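
For reference, the fio pass earlier in this test (scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v) ran a one-second, queue-depth-1, 4 KiB sequential-write job with crc32c verification against /dev/nvme0n1, the namespace exposed by cnode1; that job produced the ~3.5k write IOPS and the 24.9 ms maximum completion latency visible in the clat statistics above. Reassembled from the job parameters echoed in the log, an equivalent standalone job file would look roughly like this (the /tmp path is only illustrative; fio-wrapper generates its own file):

cat > /tmp/nmic-job0.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF
fio /tmp/nmic-job0.fio
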
00:16:44.182 14:22:49 -- nvmf/common.sh@477 -- # '[' -n 86726 ']' 00:16:44.182 14:22:49 -- nvmf/common.sh@478 -- # killprocess 86726 00:16:44.182 14:22:49 -- common/autotest_common.sh@936 -- # '[' -z 86726 ']' 00:16:44.183 14:22:49 -- common/autotest_common.sh@940 -- # kill -0 86726 00:16:44.183 14:22:49 -- common/autotest_common.sh@941 -- # uname 00:16:44.183 14:22:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:44.183 14:22:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86726 00:16:44.183 14:22:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:44.183 14:22:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:44.183 killing process with pid 86726 00:16:44.183 14:22:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86726' 00:16:44.183 14:22:49 -- common/autotest_common.sh@955 -- # kill 86726 00:16:44.183 14:22:49 -- common/autotest_common.sh@960 -- # wait 86726 00:16:44.752 14:22:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:44.752 14:22:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:44.752 14:22:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:44.752 14:22:50 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:44.752 14:22:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:44.752 14:22:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.752 14:22:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:44.752 14:22:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.752 14:22:50 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:44.752 00:16:44.752 real 0m6.254s 00:16:44.752 user 0m21.114s 00:16:44.752 sys 0m1.303s 00:16:44.752 14:22:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:44.752 ************************************ 00:16:44.752 END TEST nvmf_nmic 00:16:44.752 ************************************ 00:16:44.752 14:22:50 -- common/autotest_common.sh@10 -- # set +x 00:16:44.752 14:22:50 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:44.752 14:22:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:44.752 14:22:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:44.752 14:22:50 -- common/autotest_common.sh@10 -- # set +x 00:16:44.752 ************************************ 00:16:44.752 START TEST nvmf_fio_target 00:16:44.752 ************************************ 00:16:44.752 14:22:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:44.752 * Looking for test storage... 
00:16:44.752 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:44.752 14:22:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:44.752 14:22:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:44.752 14:22:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:44.752 14:22:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:44.752 14:22:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:44.752 14:22:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:44.752 14:22:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:44.752 14:22:50 -- scripts/common.sh@335 -- # IFS=.-: 00:16:44.752 14:22:50 -- scripts/common.sh@335 -- # read -ra ver1 00:16:44.752 14:22:50 -- scripts/common.sh@336 -- # IFS=.-: 00:16:44.752 14:22:50 -- scripts/common.sh@336 -- # read -ra ver2 00:16:44.752 14:22:50 -- scripts/common.sh@337 -- # local 'op=<' 00:16:44.752 14:22:50 -- scripts/common.sh@339 -- # ver1_l=2 00:16:44.752 14:22:50 -- scripts/common.sh@340 -- # ver2_l=1 00:16:44.752 14:22:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:44.752 14:22:50 -- scripts/common.sh@343 -- # case "$op" in 00:16:44.752 14:22:50 -- scripts/common.sh@344 -- # : 1 00:16:44.752 14:22:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:44.752 14:22:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:44.752 14:22:50 -- scripts/common.sh@364 -- # decimal 1 00:16:44.752 14:22:50 -- scripts/common.sh@352 -- # local d=1 00:16:44.752 14:22:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:44.752 14:22:50 -- scripts/common.sh@354 -- # echo 1 00:16:44.752 14:22:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:44.752 14:22:50 -- scripts/common.sh@365 -- # decimal 2 00:16:44.752 14:22:50 -- scripts/common.sh@352 -- # local d=2 00:16:44.752 14:22:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:44.752 14:22:50 -- scripts/common.sh@354 -- # echo 2 00:16:44.752 14:22:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:44.752 14:22:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:44.752 14:22:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:44.752 14:22:50 -- scripts/common.sh@367 -- # return 0 00:16:44.752 14:22:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:44.752 14:22:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:44.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.752 --rc genhtml_branch_coverage=1 00:16:44.752 --rc genhtml_function_coverage=1 00:16:44.752 --rc genhtml_legend=1 00:16:44.752 --rc geninfo_all_blocks=1 00:16:44.752 --rc geninfo_unexecuted_blocks=1 00:16:44.752 00:16:44.752 ' 00:16:44.752 14:22:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:44.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.752 --rc genhtml_branch_coverage=1 00:16:44.752 --rc genhtml_function_coverage=1 00:16:44.752 --rc genhtml_legend=1 00:16:44.752 --rc geninfo_all_blocks=1 00:16:44.752 --rc geninfo_unexecuted_blocks=1 00:16:44.752 00:16:44.752 ' 00:16:44.752 14:22:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:44.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.752 --rc genhtml_branch_coverage=1 00:16:44.752 --rc genhtml_function_coverage=1 00:16:44.752 --rc genhtml_legend=1 00:16:44.752 --rc geninfo_all_blocks=1 00:16:44.752 --rc geninfo_unexecuted_blocks=1 00:16:44.752 00:16:44.752 ' 00:16:44.752 
14:22:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:44.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.752 --rc genhtml_branch_coverage=1 00:16:44.752 --rc genhtml_function_coverage=1 00:16:44.752 --rc genhtml_legend=1 00:16:44.752 --rc geninfo_all_blocks=1 00:16:44.752 --rc geninfo_unexecuted_blocks=1 00:16:44.752 00:16:44.752 ' 00:16:44.752 14:22:50 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:44.752 14:22:50 -- nvmf/common.sh@7 -- # uname -s 00:16:44.752 14:22:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:44.752 14:22:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:44.752 14:22:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:44.752 14:22:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:44.752 14:22:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:44.752 14:22:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:44.752 14:22:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:44.752 14:22:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:44.752 14:22:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:44.752 14:22:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:45.012 14:22:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:16:45.012 14:22:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:16:45.012 14:22:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:45.012 14:22:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:45.012 14:22:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:45.012 14:22:50 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:45.012 14:22:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:45.012 14:22:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:45.012 14:22:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:45.012 14:22:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.012 14:22:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.012 14:22:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.012 14:22:50 -- paths/export.sh@5 -- # export PATH 00:16:45.012 14:22:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.012 14:22:50 -- nvmf/common.sh@46 -- # : 0 00:16:45.012 14:22:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:45.012 14:22:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:45.012 14:22:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:45.012 14:22:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:45.013 14:22:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:45.013 14:22:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:45.013 14:22:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:45.013 14:22:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:45.013 14:22:50 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:45.013 14:22:50 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:45.013 14:22:50 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:45.013 14:22:50 -- target/fio.sh@16 -- # nvmftestinit 00:16:45.013 14:22:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:45.013 14:22:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:45.013 14:22:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:45.013 14:22:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:45.013 14:22:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:45.013 14:22:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.013 14:22:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:45.013 14:22:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.013 14:22:50 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:45.013 14:22:50 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:45.013 14:22:50 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:45.013 14:22:50 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:45.013 14:22:50 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:45.013 14:22:50 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:45.013 14:22:50 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:45.013 14:22:50 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:45.013 14:22:50 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:45.013 14:22:50 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:45.013 14:22:50 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:45.013 14:22:50 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:45.013 14:22:50 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:45.013 14:22:50 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:45.013 14:22:50 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:45.013 14:22:50 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:45.013 14:22:50 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:45.013 14:22:50 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:45.013 14:22:50 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:45.013 14:22:50 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:45.013 Cannot find device "nvmf_tgt_br" 00:16:45.013 14:22:50 -- nvmf/common.sh@154 -- # true 00:16:45.013 14:22:50 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:45.013 Cannot find device "nvmf_tgt_br2" 00:16:45.013 14:22:50 -- nvmf/common.sh@155 -- # true 00:16:45.013 14:22:50 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:45.013 14:22:50 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:45.013 Cannot find device "nvmf_tgt_br" 00:16:45.013 14:22:50 -- nvmf/common.sh@157 -- # true 00:16:45.013 14:22:50 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:45.013 Cannot find device "nvmf_tgt_br2" 00:16:45.013 14:22:50 -- nvmf/common.sh@158 -- # true 00:16:45.013 14:22:50 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:45.013 14:22:50 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:45.013 14:22:50 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:45.013 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:45.013 14:22:50 -- nvmf/common.sh@161 -- # true 00:16:45.013 14:22:50 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:45.013 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:45.013 14:22:50 -- nvmf/common.sh@162 -- # true 00:16:45.013 14:22:50 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:45.013 14:22:50 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:45.013 14:22:50 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:45.013 14:22:50 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:45.013 14:22:50 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:45.013 14:22:50 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:45.013 14:22:50 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:45.013 14:22:50 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:45.013 14:22:50 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:45.013 14:22:50 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:45.013 14:22:50 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:45.013 14:22:50 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:45.272 14:22:50 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:45.272 14:22:50 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:45.272 14:22:50 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
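The nvmf_veth_init steps traced in this and the following chunk build a small virtual topology for the NVMe/TCP tests: the target listens inside a network namespace, and a bridge joins its veth leg to the initiator-side leg. Stripped of the xtrace prefixes, a condensed sketch of the same sequence (same namespace, interface, and address names as in the trace; the second target interface nvmf_tgt_if2 / 10.0.0.3 is handled identically) looks like this:

# target side runs inside its own network namespace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator leg
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target leg
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
# bridge the two legs and allow NVMe/TCP (port 4420) traffic in
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
# sanity check: the initiator should now reach the target address
ping -c 1 10.0.0.2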
00:16:45.272 14:22:50 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:45.272 14:22:50 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:45.272 14:22:50 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:45.272 14:22:50 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:45.272 14:22:50 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:45.272 14:22:50 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:45.272 14:22:50 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:45.272 14:22:50 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:45.272 14:22:50 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:45.272 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:45.272 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:16:45.272 00:16:45.272 --- 10.0.0.2 ping statistics --- 00:16:45.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.272 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:16:45.272 14:22:50 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:45.272 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:45.272 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:16:45.272 00:16:45.272 --- 10.0.0.3 ping statistics --- 00:16:45.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.272 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:16:45.272 14:22:50 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:45.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:45.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:16:45.272 00:16:45.272 --- 10.0.0.1 ping statistics --- 00:16:45.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.272 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:16:45.272 14:22:50 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:45.272 14:22:50 -- nvmf/common.sh@421 -- # return 0 00:16:45.272 14:22:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:45.272 14:22:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:45.272 14:22:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:45.272 14:22:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:45.272 14:22:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:45.272 14:22:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:45.272 14:22:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:45.272 14:22:50 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:45.272 14:22:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:45.272 14:22:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:45.272 14:22:50 -- common/autotest_common.sh@10 -- # set +x 00:16:45.272 14:22:50 -- nvmf/common.sh@469 -- # nvmfpid=87027 00:16:45.272 14:22:50 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:45.272 14:22:50 -- nvmf/common.sh@470 -- # waitforlisten 87027 00:16:45.272 14:22:50 -- common/autotest_common.sh@829 -- # '[' -z 87027 ']' 00:16:45.272 14:22:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.272 14:22:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:45.272 14:22:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:16:45.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.272 14:22:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:45.272 14:22:50 -- common/autotest_common.sh@10 -- # set +x 00:16:45.272 [2024-12-05 14:22:50.838225] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:45.272 [2024-12-05 14:22:50.838309] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:45.532 [2024-12-05 14:22:50.975331] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:45.532 [2024-12-05 14:22:51.049799] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:45.532 [2024-12-05 14:22:51.049954] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:45.532 [2024-12-05 14:22:51.049966] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:45.532 [2024-12-05 14:22:51.049974] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:45.532 [2024-12-05 14:22:51.050146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:45.532 [2024-12-05 14:22:51.050294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:45.532 [2024-12-05 14:22:51.050641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:45.532 [2024-12-05 14:22:51.050668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.468 14:22:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:46.468 14:22:51 -- common/autotest_common.sh@862 -- # return 0 00:16:46.468 14:22:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:46.468 14:22:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:46.468 14:22:51 -- common/autotest_common.sh@10 -- # set +x 00:16:46.468 14:22:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:46.468 14:22:51 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:46.468 [2024-12-05 14:22:52.102738] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:46.727 14:22:52 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:46.986 14:22:52 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:46.986 14:22:52 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:47.245 14:22:52 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:47.245 14:22:52 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:47.505 14:22:53 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:47.505 14:22:53 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:47.764 14:22:53 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:47.764 14:22:53 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:48.023 14:22:53 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:48.282 14:22:53 -- target/fio.sh@29 -- # 
concat_malloc_bdevs='Malloc4 ' 00:16:48.282 14:22:53 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:48.590 14:22:53 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:48.590 14:22:53 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:48.874 14:22:54 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:48.874 14:22:54 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:48.874 14:22:54 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:49.132 14:22:54 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:49.132 14:22:54 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:49.391 14:22:54 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:49.391 14:22:54 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:49.650 14:22:55 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:49.907 [2024-12-05 14:22:55.330283] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:49.907 14:22:55 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:49.907 14:22:55 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:50.473 14:22:55 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:50.473 14:22:56 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:50.473 14:22:56 -- common/autotest_common.sh@1187 -- # local i=0 00:16:50.473 14:22:56 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:16:50.473 14:22:56 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:16:50.473 14:22:56 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:16:50.473 14:22:56 -- common/autotest_common.sh@1194 -- # sleep 2 00:16:52.381 14:22:58 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:16:52.381 14:22:58 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:16:52.381 14:22:58 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:16:52.640 14:22:58 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:16:52.640 14:22:58 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:16:52.640 14:22:58 -- common/autotest_common.sh@1197 -- # return 0 00:16:52.640 14:22:58 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:52.640 [global] 00:16:52.640 thread=1 00:16:52.640 invalidate=1 00:16:52.640 rw=write 00:16:52.640 time_based=1 00:16:52.640 runtime=1 00:16:52.640 ioengine=libaio 00:16:52.640 direct=1 00:16:52.640 bs=4096 00:16:52.640 iodepth=1 00:16:52.640 norandommap=0 00:16:52.640 numjobs=1 00:16:52.640 00:16:52.640 verify_dump=1 00:16:52.640 verify_backlog=512 00:16:52.640 
verify_state_save=0 00:16:52.640 do_verify=1 00:16:52.640 verify=crc32c-intel 00:16:52.640 [job0] 00:16:52.640 filename=/dev/nvme0n1 00:16:52.640 [job1] 00:16:52.640 filename=/dev/nvme0n2 00:16:52.640 [job2] 00:16:52.640 filename=/dev/nvme0n3 00:16:52.640 [job3] 00:16:52.640 filename=/dev/nvme0n4 00:16:52.640 Could not set queue depth (nvme0n1) 00:16:52.640 Could not set queue depth (nvme0n2) 00:16:52.640 Could not set queue depth (nvme0n3) 00:16:52.640 Could not set queue depth (nvme0n4) 00:16:52.640 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:52.640 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:52.640 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:52.640 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:52.640 fio-3.35 00:16:52.640 Starting 4 threads 00:16:54.022 00:16:54.022 job0: (groupid=0, jobs=1): err= 0: pid=87315: Thu Dec 5 14:22:59 2024 00:16:54.022 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:16:54.022 slat (nsec): min=17349, max=75170, avg=22390.99, stdev=5432.11 00:16:54.022 clat (usec): min=156, max=589, avg=335.17, stdev=72.66 00:16:54.022 lat (usec): min=177, max=612, avg=357.56, stdev=74.69 00:16:54.022 clat percentiles (usec): 00:16:54.022 | 1.00th=[ 204], 5.00th=[ 215], 10.00th=[ 233], 20.00th=[ 255], 00:16:54.022 | 30.00th=[ 281], 40.00th=[ 322], 50.00th=[ 351], 60.00th=[ 375], 00:16:54.022 | 70.00th=[ 388], 80.00th=[ 404], 90.00th=[ 420], 95.00th=[ 433], 00:16:54.022 | 99.00th=[ 461], 99.50th=[ 474], 99.90th=[ 515], 99.95th=[ 586], 00:16:54.022 | 99.99th=[ 586] 00:16:54.022 write: IOPS=1684, BW=6737KiB/s (6899kB/s)(6744KiB/1001msec); 0 zone resets 00:16:54.022 slat (usec): min=26, max=192, avg=32.99, stdev= 8.32 00:16:54.022 clat (usec): min=101, max=804, avg=229.71, stdev=40.18 00:16:54.022 lat (usec): min=134, max=859, avg=262.70, stdev=43.30 00:16:54.022 clat percentiles (usec): 00:16:54.022 | 1.00th=[ 149], 5.00th=[ 165], 10.00th=[ 188], 20.00th=[ 202], 00:16:54.022 | 30.00th=[ 210], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 233], 00:16:54.022 | 70.00th=[ 245], 80.00th=[ 262], 90.00th=[ 281], 95.00th=[ 293], 00:16:54.022 | 99.00th=[ 322], 99.50th=[ 347], 99.90th=[ 383], 99.95th=[ 807], 00:16:54.022 | 99.99th=[ 807] 00:16:54.022 bw ( KiB/s): min= 8175, max= 8175, per=32.50%, avg=8175.00, stdev= 0.00, samples=1 00:16:54.022 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:16:54.022 lat (usec) : 250=46.37%, 500=53.51%, 750=0.09%, 1000=0.03% 00:16:54.022 cpu : usr=1.90%, sys=6.40%, ctx=3223, majf=0, minf=7 00:16:54.022 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:54.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.022 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.022 issued rwts: total=1536,1686,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.022 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:54.022 job1: (groupid=0, jobs=1): err= 0: pid=87316: Thu Dec 5 14:22:59 2024 00:16:54.022 read: IOPS=1107, BW=4432KiB/s (4538kB/s)(4436KiB/1001msec) 00:16:54.022 slat (usec): min=8, max=473, avg=24.59, stdev=23.86 00:16:54.022 clat (usec): min=149, max=701, avg=469.05, stdev=48.97 00:16:54.022 lat (usec): min=174, max=865, avg=493.64, stdev=52.51 00:16:54.022 clat percentiles 
(usec): 00:16:54.022 | 1.00th=[ 334], 5.00th=[ 383], 10.00th=[ 412], 20.00th=[ 437], 00:16:54.022 | 30.00th=[ 453], 40.00th=[ 461], 50.00th=[ 469], 60.00th=[ 482], 00:16:54.022 | 70.00th=[ 490], 80.00th=[ 502], 90.00th=[ 529], 95.00th=[ 545], 00:16:54.022 | 99.00th=[ 586], 99.50th=[ 611], 99.90th=[ 660], 99.95th=[ 701], 00:16:54.022 | 99.99th=[ 701] 00:16:54.022 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:54.022 slat (nsec): min=16561, max=87755, avg=25560.97, stdev=8402.39 00:16:54.022 clat (usec): min=119, max=2422, avg=265.46, stdev=65.16 00:16:54.022 lat (usec): min=188, max=2454, avg=291.02, stdev=66.15 00:16:54.022 clat percentiles (usec): 00:16:54.022 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 212], 20.00th=[ 235], 00:16:54.022 | 30.00th=[ 247], 40.00th=[ 258], 50.00th=[ 269], 60.00th=[ 277], 00:16:54.022 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[ 314], 00:16:54.022 | 99.00th=[ 338], 99.50th=[ 355], 99.90th=[ 529], 99.95th=[ 2409], 00:16:54.022 | 99.99th=[ 2409] 00:16:54.022 bw ( KiB/s): min= 6344, max= 6344, per=25.22%, avg=6344.00, stdev= 0.00, samples=1 00:16:54.022 iops : min= 1586, max= 1586, avg=1586.00, stdev= 0.00, samples=1 00:16:54.022 lat (usec) : 250=18.79%, 500=71.80%, 750=9.38% 00:16:54.022 lat (msec) : 4=0.04% 00:16:54.022 cpu : usr=1.50%, sys=4.70%, ctx=3041, majf=0, minf=11 00:16:54.022 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:54.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.022 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.022 issued rwts: total=1109,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.022 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:54.022 job2: (groupid=0, jobs=1): err= 0: pid=87317: Thu Dec 5 14:22:59 2024 00:16:54.022 read: IOPS=1105, BW=4424KiB/s (4530kB/s)(4428KiB/1001msec) 00:16:54.022 slat (usec): min=8, max=100, avg=20.80, stdev=14.57 00:16:54.022 clat (usec): min=300, max=840, avg=473.47, stdev=49.55 00:16:54.022 lat (usec): min=315, max=915, avg=494.27, stdev=51.36 00:16:54.022 clat percentiles (usec): 00:16:54.022 | 1.00th=[ 355], 5.00th=[ 392], 10.00th=[ 412], 20.00th=[ 441], 00:16:54.022 | 30.00th=[ 453], 40.00th=[ 465], 50.00th=[ 474], 60.00th=[ 482], 00:16:54.022 | 70.00th=[ 494], 80.00th=[ 510], 90.00th=[ 537], 95.00th=[ 553], 00:16:54.022 | 99.00th=[ 594], 99.50th=[ 611], 99.90th=[ 758], 99.95th=[ 840], 00:16:54.022 | 99.99th=[ 840] 00:16:54.022 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:54.022 slat (usec): min=16, max=119, avg=28.96, stdev= 8.93 00:16:54.022 clat (usec): min=119, max=2361, avg=262.32, stdev=66.28 00:16:54.022 lat (usec): min=151, max=2393, avg=291.29, stdev=67.40 00:16:54.022 clat percentiles (usec): 00:16:54.022 | 1.00th=[ 174], 5.00th=[ 184], 10.00th=[ 215], 20.00th=[ 229], 00:16:54.022 | 30.00th=[ 243], 40.00th=[ 253], 50.00th=[ 265], 60.00th=[ 273], 00:16:54.022 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[ 322], 00:16:54.022 | 99.00th=[ 343], 99.50th=[ 355], 99.90th=[ 586], 99.95th=[ 2376], 00:16:54.022 | 99.99th=[ 2376] 00:16:54.022 bw ( KiB/s): min= 6328, max= 6328, per=25.16%, avg=6328.00, stdev= 0.00, samples=1 00:16:54.022 iops : min= 1582, max= 1582, avg=1582.00, stdev= 0.00, samples=1 00:16:54.022 lat (usec) : 250=21.23%, 500=67.65%, 750=11.01%, 1000=0.08% 00:16:54.022 lat (msec) : 4=0.04% 00:16:54.022 cpu : usr=1.30%, sys=5.20%, ctx=2881, majf=0, minf=12 00:16:54.022 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:54.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.022 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.022 issued rwts: total=1107,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.022 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:54.022 job3: (groupid=0, jobs=1): err= 0: pid=87318: Thu Dec 5 14:22:59 2024 00:16:54.022 read: IOPS=1348, BW=5395KiB/s (5524kB/s)(5400KiB/1001msec) 00:16:54.022 slat (usec): min=8, max=539, avg=25.05, stdev=20.47 00:16:54.022 clat (usec): min=72, max=2422, avg=443.71, stdev=153.30 00:16:54.022 lat (usec): min=180, max=2441, avg=468.76, stdev=152.59 00:16:54.022 clat percentiles (usec): 00:16:54.022 | 1.00th=[ 163], 5.00th=[ 174], 10.00th=[ 190], 20.00th=[ 371], 00:16:54.022 | 30.00th=[ 416], 40.00th=[ 453], 50.00th=[ 490], 60.00th=[ 506], 00:16:54.022 | 70.00th=[ 523], 80.00th=[ 537], 90.00th=[ 570], 95.00th=[ 594], 00:16:54.022 | 99.00th=[ 627], 99.50th=[ 652], 99.90th=[ 2245], 99.95th=[ 2409], 00:16:54.022 | 99.99th=[ 2409] 00:16:54.022 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:54.022 slat (usec): min=11, max=131, avg=25.11, stdev= 8.52 00:16:54.022 clat (usec): min=111, max=7216, avg=209.86, stdev=192.93 00:16:54.022 lat (usec): min=138, max=7241, avg=234.97, stdev=192.62 00:16:54.022 clat percentiles (usec): 00:16:54.022 | 1.00th=[ 118], 5.00th=[ 125], 10.00th=[ 130], 20.00th=[ 141], 00:16:54.022 | 30.00th=[ 157], 40.00th=[ 208], 50.00th=[ 217], 60.00th=[ 225], 00:16:54.022 | 70.00th=[ 233], 80.00th=[ 243], 90.00th=[ 269], 95.00th=[ 297], 00:16:54.022 | 99.00th=[ 338], 99.50th=[ 351], 99.90th=[ 1778], 99.95th=[ 7242], 00:16:54.022 | 99.99th=[ 7242] 00:16:54.022 bw ( KiB/s): min= 8175, max= 8175, per=32.50%, avg=8175.00, stdev= 0.00, samples=1 00:16:54.022 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:16:54.022 lat (usec) : 100=0.03%, 250=53.19%, 500=26.06%, 750=20.51% 00:16:54.022 lat (msec) : 2=0.10%, 4=0.07%, 10=0.03% 00:16:54.022 cpu : usr=1.50%, sys=5.10%, ctx=3207, majf=0, minf=8 00:16:54.022 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:54.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.022 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.022 issued rwts: total=1350,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.022 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:54.022 00:16:54.022 Run status group 0 (all jobs): 00:16:54.022 READ: bw=19.9MiB/s (20.9MB/s), 4424KiB/s-6138KiB/s (4530kB/s-6285kB/s), io=19.9MiB (20.9MB), run=1001-1001msec 00:16:54.022 WRITE: bw=24.6MiB/s (25.8MB/s), 6138KiB/s-6737KiB/s (6285kB/s-6899kB/s), io=24.6MiB (25.8MB), run=1001-1001msec 00:16:54.022 00:16:54.022 Disk stats (read/write): 00:16:54.022 nvme0n1: ios=1331/1536, merge=0/0, ticks=454/377, in_queue=831, util=87.58% 00:16:54.022 nvme0n2: ios=1073/1178, merge=0/0, ticks=507/306, in_queue=813, util=88.54% 00:16:54.022 nvme0n3: ios=1024/1175, merge=0/0, ticks=467/319, in_queue=786, util=89.02% 00:16:54.022 nvme0n4: ios=1086/1536, merge=0/0, ticks=453/282, in_queue=735, util=88.94% 00:16:54.022 14:22:59 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:54.022 [global] 00:16:54.022 thread=1 00:16:54.022 invalidate=1 00:16:54.022 rw=randwrite 00:16:54.022 time_based=1 
00:16:54.022 runtime=1 00:16:54.022 ioengine=libaio 00:16:54.022 direct=1 00:16:54.022 bs=4096 00:16:54.022 iodepth=1 00:16:54.022 norandommap=0 00:16:54.022 numjobs=1 00:16:54.022 00:16:54.022 verify_dump=1 00:16:54.022 verify_backlog=512 00:16:54.022 verify_state_save=0 00:16:54.022 do_verify=1 00:16:54.022 verify=crc32c-intel 00:16:54.022 [job0] 00:16:54.022 filename=/dev/nvme0n1 00:16:54.022 [job1] 00:16:54.022 filename=/dev/nvme0n2 00:16:54.023 [job2] 00:16:54.023 filename=/dev/nvme0n3 00:16:54.023 [job3] 00:16:54.023 filename=/dev/nvme0n4 00:16:54.023 Could not set queue depth (nvme0n1) 00:16:54.023 Could not set queue depth (nvme0n2) 00:16:54.023 Could not set queue depth (nvme0n3) 00:16:54.023 Could not set queue depth (nvme0n4) 00:16:54.023 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:54.023 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:54.023 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:54.023 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:54.023 fio-3.35 00:16:54.023 Starting 4 threads 00:16:55.403 00:16:55.403 job0: (groupid=0, jobs=1): err= 0: pid=87377: Thu Dec 5 14:23:00 2024 00:16:55.403 read: IOPS=2262, BW=9051KiB/s (9268kB/s)(9060KiB/1001msec) 00:16:55.403 slat (usec): min=13, max=108, avg=15.90, stdev= 6.09 00:16:55.403 clat (usec): min=135, max=325, avg=205.25, stdev=24.84 00:16:55.403 lat (usec): min=155, max=348, avg=221.15, stdev=25.36 00:16:55.403 clat percentiles (usec): 00:16:55.403 | 1.00th=[ 155], 5.00th=[ 169], 10.00th=[ 178], 20.00th=[ 186], 00:16:55.403 | 30.00th=[ 192], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 210], 00:16:55.403 | 70.00th=[ 215], 80.00th=[ 223], 90.00th=[ 235], 95.00th=[ 251], 00:16:55.403 | 99.00th=[ 285], 99.50th=[ 293], 99.90th=[ 318], 99.95th=[ 318], 00:16:55.403 | 99.99th=[ 326] 00:16:55.403 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:16:55.403 slat (usec): min=19, max=108, avg=24.63, stdev= 8.30 00:16:55.403 clat (usec): min=99, max=393, avg=167.22, stdev=27.19 00:16:55.403 lat (usec): min=120, max=470, avg=191.84, stdev=30.06 00:16:55.403 clat percentiles (usec): 00:16:55.403 | 1.00th=[ 121], 5.00th=[ 131], 10.00th=[ 139], 20.00th=[ 147], 00:16:55.403 | 30.00th=[ 153], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 169], 00:16:55.403 | 70.00th=[ 178], 80.00th=[ 186], 90.00th=[ 200], 95.00th=[ 217], 00:16:55.403 | 99.00th=[ 251], 99.50th=[ 269], 99.90th=[ 330], 99.95th=[ 371], 00:16:55.403 | 99.99th=[ 396] 00:16:55.403 bw ( KiB/s): min=10944, max=10944, per=34.48%, avg=10944.00, stdev= 0.00, samples=1 00:16:55.403 iops : min= 2736, max= 2736, avg=2736.00, stdev= 0.00, samples=1 00:16:55.403 lat (usec) : 100=0.02%, 250=97.04%, 500=2.94% 00:16:55.403 cpu : usr=1.70%, sys=7.20%, ctx=4838, majf=0, minf=5 00:16:55.403 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.403 issued rwts: total=2265,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.403 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:55.403 job1: (groupid=0, jobs=1): err= 0: pid=87379: Thu Dec 5 14:23:00 2024 00:16:55.403 read: IOPS=1263, BW=5055KiB/s 
(5176kB/s)(5060KiB/1001msec) 00:16:55.403 slat (nsec): min=10757, max=62213, avg=15382.16, stdev=4776.23 00:16:55.403 clat (usec): min=184, max=734, avg=370.68, stdev=39.12 00:16:55.403 lat (usec): min=194, max=746, avg=386.06, stdev=39.50 00:16:55.403 clat percentiles (usec): 00:16:55.403 | 1.00th=[ 302], 5.00th=[ 322], 10.00th=[ 330], 20.00th=[ 343], 00:16:55.403 | 30.00th=[ 351], 40.00th=[ 359], 50.00th=[ 367], 60.00th=[ 375], 00:16:55.403 | 70.00th=[ 383], 80.00th=[ 396], 90.00th=[ 412], 95.00th=[ 433], 00:16:55.403 | 99.00th=[ 502], 99.50th=[ 529], 99.90th=[ 652], 99.95th=[ 734], 00:16:55.403 | 99.99th=[ 734] 00:16:55.403 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:55.403 slat (usec): min=14, max=110, avg=25.87, stdev= 7.06 00:16:55.404 clat (usec): min=140, max=504, avg=303.95, stdev=37.14 00:16:55.404 lat (usec): min=164, max=525, avg=329.82, stdev=37.32 00:16:55.404 clat percentiles (usec): 00:16:55.404 | 1.00th=[ 221], 5.00th=[ 245], 10.00th=[ 258], 20.00th=[ 273], 00:16:55.404 | 30.00th=[ 285], 40.00th=[ 297], 50.00th=[ 306], 60.00th=[ 314], 00:16:55.404 | 70.00th=[ 326], 80.00th=[ 334], 90.00th=[ 347], 95.00th=[ 363], 00:16:55.404 | 99.00th=[ 392], 99.50th=[ 404], 99.90th=[ 465], 99.95th=[ 506], 00:16:55.404 | 99.99th=[ 506] 00:16:55.404 bw ( KiB/s): min= 7136, max= 7136, per=22.48%, avg=7136.00, stdev= 0.00, samples=1 00:16:55.404 iops : min= 1784, max= 1784, avg=1784.00, stdev= 0.00, samples=1 00:16:55.404 lat (usec) : 250=4.07%, 500=95.39%, 750=0.54% 00:16:55.404 cpu : usr=1.00%, sys=4.60%, ctx=2801, majf=0, minf=11 00:16:55.404 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.404 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.404 issued rwts: total=1265,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.404 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:55.404 job2: (groupid=0, jobs=1): err= 0: pid=87383: Thu Dec 5 14:23:00 2024 00:16:55.404 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:16:55.404 slat (nsec): min=12871, max=82091, avg=16166.27, stdev=4823.19 00:16:55.404 clat (usec): min=168, max=2683, avg=229.81, stdev=60.41 00:16:55.404 lat (usec): min=183, max=2699, avg=245.97, stdev=60.68 00:16:55.404 clat percentiles (usec): 00:16:55.404 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 206], 00:16:55.404 | 30.00th=[ 212], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 233], 00:16:55.404 | 70.00th=[ 241], 80.00th=[ 251], 90.00th=[ 265], 95.00th=[ 277], 00:16:55.404 | 99.00th=[ 306], 99.50th=[ 310], 99.90th=[ 363], 99.95th=[ 367], 00:16:55.404 | 99.99th=[ 2671] 00:16:55.404 write: IOPS=2308, BW=9235KiB/s (9456kB/s)(9244KiB/1001msec); 0 zone resets 00:16:55.404 slat (nsec): min=19276, max=99589, avg=25379.23, stdev=7499.52 00:16:55.404 clat (usec): min=113, max=388, avg=186.17, stdev=29.05 00:16:55.404 lat (usec): min=134, max=417, avg=211.55, stdev=30.83 00:16:55.404 clat percentiles (usec): 00:16:55.404 | 1.00th=[ 133], 5.00th=[ 145], 10.00th=[ 151], 20.00th=[ 161], 00:16:55.404 | 30.00th=[ 172], 40.00th=[ 178], 50.00th=[ 184], 60.00th=[ 192], 00:16:55.404 | 70.00th=[ 200], 80.00th=[ 208], 90.00th=[ 223], 95.00th=[ 239], 00:16:55.404 | 99.00th=[ 265], 99.50th=[ 277], 99.90th=[ 363], 99.95th=[ 375], 00:16:55.404 | 99.99th=[ 388] 00:16:55.404 bw ( KiB/s): min= 8920, max= 8920, per=28.10%, avg=8920.00, stdev= 0.00, samples=1 00:16:55.404 iops : min= 
2230, max= 2230, avg=2230.00, stdev= 0.00, samples=1 00:16:55.404 lat (usec) : 250=89.17%, 500=10.81% 00:16:55.404 lat (msec) : 4=0.02% 00:16:55.404 cpu : usr=1.50%, sys=6.70%, ctx=4360, majf=0, minf=14 00:16:55.404 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.404 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.404 issued rwts: total=2048,2311,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.404 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:55.404 job3: (groupid=0, jobs=1): err= 0: pid=87384: Thu Dec 5 14:23:00 2024 00:16:55.404 read: IOPS=1263, BW=5055KiB/s (5176kB/s)(5060KiB/1001msec) 00:16:55.404 slat (nsec): min=10883, max=54664, avg=15736.82, stdev=4803.71 00:16:55.404 clat (usec): min=167, max=730, avg=370.30, stdev=39.38 00:16:55.404 lat (usec): min=185, max=744, avg=386.04, stdev=40.44 00:16:55.404 clat percentiles (usec): 00:16:55.404 | 1.00th=[ 297], 5.00th=[ 322], 10.00th=[ 330], 20.00th=[ 343], 00:16:55.404 | 30.00th=[ 351], 40.00th=[ 359], 50.00th=[ 367], 60.00th=[ 375], 00:16:55.404 | 70.00th=[ 383], 80.00th=[ 396], 90.00th=[ 412], 95.00th=[ 429], 00:16:55.404 | 99.00th=[ 494], 99.50th=[ 523], 99.90th=[ 701], 99.95th=[ 734], 00:16:55.404 | 99.99th=[ 734] 00:16:55.404 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:55.404 slat (nsec): min=11024, max=72779, avg=25798.71, stdev=7074.75 00:16:55.404 clat (usec): min=164, max=520, avg=304.00, stdev=35.63 00:16:55.404 lat (usec): min=188, max=542, avg=329.80, stdev=35.75 00:16:55.404 clat percentiles (usec): 00:16:55.404 | 1.00th=[ 225], 5.00th=[ 247], 10.00th=[ 258], 20.00th=[ 277], 00:16:55.404 | 30.00th=[ 289], 40.00th=[ 297], 50.00th=[ 306], 60.00th=[ 314], 00:16:55.404 | 70.00th=[ 322], 80.00th=[ 330], 90.00th=[ 347], 95.00th=[ 359], 00:16:55.404 | 99.00th=[ 392], 99.50th=[ 416], 99.90th=[ 445], 99.95th=[ 523], 00:16:55.404 | 99.99th=[ 523] 00:16:55.404 bw ( KiB/s): min= 7142, max= 7142, per=22.50%, avg=7142.00, stdev= 0.00, samples=1 00:16:55.404 iops : min= 1785, max= 1785, avg=1785.00, stdev= 0.00, samples=1 00:16:55.404 lat (usec) : 250=3.68%, 500=95.93%, 750=0.39% 00:16:55.404 cpu : usr=1.00%, sys=4.40%, ctx=2801, majf=0, minf=15 00:16:55.404 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.404 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.404 issued rwts: total=1265,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.404 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:55.404 00:16:55.404 Run status group 0 (all jobs): 00:16:55.404 READ: bw=26.7MiB/s (28.0MB/s), 5055KiB/s-9051KiB/s (5176kB/s-9268kB/s), io=26.7MiB (28.0MB), run=1001-1001msec 00:16:55.404 WRITE: bw=31.0MiB/s (32.5MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=31.0MiB (32.5MB), run=1001-1001msec 00:16:55.404 00:16:55.404 Disk stats (read/write): 00:16:55.404 nvme0n1: ios=2098/2121, merge=0/0, ticks=458/381, in_queue=839, util=89.18% 00:16:55.404 nvme0n2: ios=1072/1430, merge=0/0, ticks=414/452, in_queue=866, util=89.80% 00:16:55.404 nvme0n3: ios=1756/2048, merge=0/0, ticks=440/412, in_queue=852, util=90.16% 00:16:55.404 nvme0n4: ios=1030/1429, merge=0/0, ticks=386/446, in_queue=832, util=90.01% 00:16:55.404 14:23:00 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 
-p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:55.404 [global] 00:16:55.404 thread=1 00:16:55.404 invalidate=1 00:16:55.404 rw=write 00:16:55.404 time_based=1 00:16:55.404 runtime=1 00:16:55.404 ioengine=libaio 00:16:55.404 direct=1 00:16:55.404 bs=4096 00:16:55.404 iodepth=128 00:16:55.404 norandommap=0 00:16:55.404 numjobs=1 00:16:55.404 00:16:55.404 verify_dump=1 00:16:55.404 verify_backlog=512 00:16:55.404 verify_state_save=0 00:16:55.404 do_verify=1 00:16:55.404 verify=crc32c-intel 00:16:55.404 [job0] 00:16:55.404 filename=/dev/nvme0n1 00:16:55.404 [job1] 00:16:55.404 filename=/dev/nvme0n2 00:16:55.404 [job2] 00:16:55.404 filename=/dev/nvme0n3 00:16:55.404 [job3] 00:16:55.404 filename=/dev/nvme0n4 00:16:55.404 Could not set queue depth (nvme0n1) 00:16:55.404 Could not set queue depth (nvme0n2) 00:16:55.404 Could not set queue depth (nvme0n3) 00:16:55.404 Could not set queue depth (nvme0n4) 00:16:55.404 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:55.404 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:55.404 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:55.404 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:55.404 fio-3.35 00:16:55.404 Starting 4 threads 00:16:56.783 00:16:56.783 job0: (groupid=0, jobs=1): err= 0: pid=87440: Thu Dec 5 14:23:02 2024 00:16:56.783 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:16:56.783 slat (usec): min=2, max=5082, avg=114.56, stdev=626.28 00:16:56.783 clat (usec): min=9659, max=20655, avg=14940.28, stdev=1588.92 00:16:56.783 lat (usec): min=9772, max=21558, avg=15054.84, stdev=1604.53 00:16:56.783 clat percentiles (usec): 00:16:56.783 | 1.00th=[10945], 5.00th=[11600], 10.00th=[12518], 20.00th=[13960], 00:16:56.783 | 30.00th=[14484], 40.00th=[15008], 50.00th=[15139], 60.00th=[15401], 00:16:56.783 | 70.00th=[15664], 80.00th=[16188], 90.00th=[16581], 95.00th=[16909], 00:16:56.783 | 99.00th=[19006], 99.50th=[19268], 99.90th=[19792], 99.95th=[20317], 00:16:56.783 | 99.99th=[20579] 00:16:56.783 write: IOPS=4320, BW=16.9MiB/s (17.7MB/s)(16.9MiB/1002msec); 0 zone resets 00:16:56.783 slat (usec): min=7, max=5422, avg=116.70, stdev=619.70 00:16:56.783 clat (usec): min=287, max=20701, avg=15124.48, stdev=2041.52 00:16:56.783 lat (usec): min=3869, max=20718, avg=15241.18, stdev=2021.68 00:16:56.783 clat percentiles (usec): 00:16:56.783 | 1.00th=[ 5080], 5.00th=[11469], 10.00th=[12256], 20.00th=[14222], 00:16:56.783 | 30.00th=[14877], 40.00th=[15270], 50.00th=[15533], 60.00th=[15795], 00:16:56.783 | 70.00th=[16057], 80.00th=[16450], 90.00th=[16909], 95.00th=[17171], 00:16:56.783 | 99.00th=[19530], 99.50th=[20317], 99.90th=[20579], 99.95th=[20579], 00:16:56.783 | 99.99th=[20579] 00:16:56.783 bw ( KiB/s): min=16384, max=17224, per=33.34%, avg=16804.00, stdev=593.97, samples=2 00:16:56.783 iops : min= 4096, max= 4306, avg=4201.00, stdev=148.49, samples=2 00:16:56.783 lat (usec) : 500=0.01% 00:16:56.783 lat (msec) : 4=0.07%, 10=0.83%, 20=98.65%, 50=0.44% 00:16:56.783 cpu : usr=2.80%, sys=10.19%, ctx=384, majf=0, minf=13 00:16:56.783 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:56.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:56.783 issued rwts: 
total=4096,4329,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:56.783 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:56.783 job1: (groupid=0, jobs=1): err= 0: pid=87441: Thu Dec 5 14:23:02 2024 00:16:56.783 read: IOPS=2150, BW=8604KiB/s (8810kB/s)(8664KiB/1007msec) 00:16:56.783 slat (usec): min=8, max=8905, avg=206.71, stdev=961.11 00:16:56.783 clat (usec): min=2639, max=38962, avg=25558.42, stdev=3974.15 00:16:56.783 lat (usec): min=7148, max=39085, avg=25765.13, stdev=4048.05 00:16:56.783 clat percentiles (usec): 00:16:56.783 | 1.00th=[ 7701], 5.00th=[21365], 10.00th=[22938], 20.00th=[23725], 00:16:56.783 | 30.00th=[24249], 40.00th=[24773], 50.00th=[25297], 60.00th=[26084], 00:16:56.783 | 70.00th=[26870], 80.00th=[27395], 90.00th=[29754], 95.00th=[30802], 00:16:56.783 | 99.00th=[38536], 99.50th=[38536], 99.90th=[38536], 99.95th=[38536], 00:16:56.783 | 99.99th=[39060] 00:16:56.783 write: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec); 0 zone resets 00:16:56.783 slat (usec): min=19, max=9509, avg=206.70, stdev=870.18 00:16:56.783 clat (usec): min=16385, max=43334, avg=27719.35, stdev=5874.98 00:16:56.783 lat (usec): min=16423, max=43391, avg=27926.05, stdev=5934.27 00:16:56.783 clat percentiles (usec): 00:16:56.783 | 1.00th=[19792], 5.00th=[20317], 10.00th=[20579], 20.00th=[21103], 00:16:56.783 | 30.00th=[21890], 40.00th=[24249], 50.00th=[27919], 60.00th=[30540], 00:16:56.783 | 70.00th=[32375], 80.00th=[33817], 90.00th=[34341], 95.00th=[36439], 00:16:56.783 | 99.00th=[39584], 99.50th=[41157], 99.90th=[43254], 99.95th=[43254], 00:16:56.783 | 99.99th=[43254] 00:16:56.783 bw ( KiB/s): min= 8816, max=11607, per=20.26%, avg=10211.50, stdev=1973.54, samples=2 00:16:56.783 iops : min= 2204, max= 2901, avg=2552.50, stdev=492.85, samples=2 00:16:56.783 lat (msec) : 4=0.02%, 10=0.72%, 20=2.48%, 50=96.78% 00:16:56.783 cpu : usr=3.28%, sys=10.04%, ctx=272, majf=0, minf=7 00:16:56.783 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:16:56.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:56.783 issued rwts: total=2166,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:56.783 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:56.783 job2: (groupid=0, jobs=1): err= 0: pid=87442: Thu Dec 5 14:23:02 2024 00:16:56.783 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:16:56.783 slat (usec): min=5, max=7838, avg=129.32, stdev=762.97 00:16:56.783 clat (usec): min=6235, max=25674, avg=16451.63, stdev=2126.28 00:16:56.783 lat (usec): min=6307, max=26890, avg=16580.94, stdev=2225.98 00:16:56.783 clat percentiles (usec): 00:16:56.783 | 1.00th=[11076], 5.00th=[13042], 10.00th=[13829], 20.00th=[15270], 00:16:56.783 | 30.00th=[15795], 40.00th=[16188], 50.00th=[16450], 60.00th=[16581], 00:16:56.783 | 70.00th=[16909], 80.00th=[17695], 90.00th=[18744], 95.00th=[20055], 00:16:56.783 | 99.00th=[23725], 99.50th=[24773], 99.90th=[25560], 99.95th=[25560], 00:16:56.783 | 99.99th=[25560] 00:16:56.783 write: IOPS=4005, BW=15.6MiB/s (16.4MB/s)(15.7MiB/1005msec); 0 zone resets 00:16:56.783 slat (usec): min=11, max=8755, avg=125.36, stdev=677.25 00:16:56.783 clat (usec): min=3902, max=27940, avg=16863.99, stdev=2271.58 00:16:56.783 lat (usec): min=4865, max=27994, avg=16989.35, stdev=2304.74 00:16:56.783 clat percentiles (usec): 00:16:56.783 | 1.00th=[ 8979], 5.00th=[12387], 10.00th=[15008], 20.00th=[15926], 00:16:56.783 | 30.00th=[16319], 
40.00th=[16712], 50.00th=[16909], 60.00th=[17433], 00:16:56.783 | 70.00th=[17695], 80.00th=[17957], 90.00th=[18744], 95.00th=[19792], 00:16:56.783 | 99.00th=[23200], 99.50th=[24249], 99.90th=[25297], 99.95th=[25560], 00:16:56.783 | 99.99th=[27919] 00:16:56.783 bw ( KiB/s): min=14800, max=16416, per=30.96%, avg=15608.00, stdev=1142.68, samples=2 00:16:56.783 iops : min= 3700, max= 4104, avg=3902.00, stdev=285.67, samples=2 00:16:56.783 lat (msec) : 4=0.01%, 10=0.97%, 20=94.10%, 50=4.91% 00:16:56.783 cpu : usr=3.69%, sys=11.06%, ctx=358, majf=0, minf=15 00:16:56.783 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:56.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:56.783 issued rwts: total=3584,4026,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:56.783 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:56.783 job3: (groupid=0, jobs=1): err= 0: pid=87443: Thu Dec 5 14:23:02 2024 00:16:56.783 read: IOPS=1525, BW=6101KiB/s (6248kB/s)(6144KiB/1007msec) 00:16:56.783 slat (usec): min=9, max=9616, avg=303.88, stdev=1396.18 00:16:56.783 clat (usec): min=25065, max=57231, avg=37488.71, stdev=6986.26 00:16:56.783 lat (usec): min=27653, max=57244, avg=37792.59, stdev=6931.82 00:16:56.783 clat percentiles (usec): 00:16:56.783 | 1.00th=[26346], 5.00th=[28967], 10.00th=[32375], 20.00th=[33162], 00:16:56.783 | 30.00th=[33817], 40.00th=[34341], 50.00th=[34866], 60.00th=[35914], 00:16:56.784 | 70.00th=[38011], 80.00th=[41681], 90.00th=[49021], 95.00th=[56361], 00:16:56.784 | 99.00th=[57410], 99.50th=[57410], 99.90th=[57410], 99.95th=[57410], 00:16:56.784 | 99.99th=[57410] 00:16:56.784 write: IOPS=1762, BW=7051KiB/s (7220kB/s)(7100KiB/1007msec); 0 zone resets 00:16:56.784 slat (usec): min=18, max=10031, avg=291.38, stdev=1046.20 00:16:56.784 clat (usec): min=6662, max=56603, avg=38503.14, stdev=8034.37 00:16:56.784 lat (usec): min=6689, max=56645, avg=38794.52, stdev=8023.34 00:16:56.784 clat percentiles (usec): 00:16:56.784 | 1.00th=[10159], 5.00th=[28705], 10.00th=[31589], 20.00th=[33424], 00:16:56.784 | 30.00th=[34341], 40.00th=[36439], 50.00th=[37487], 60.00th=[39060], 00:16:56.784 | 70.00th=[40633], 80.00th=[43779], 90.00th=[50594], 95.00th=[54789], 00:16:56.784 | 99.00th=[56361], 99.50th=[56361], 99.90th=[56361], 99.95th=[56361], 00:16:56.784 | 99.99th=[56361] 00:16:56.784 bw ( KiB/s): min= 6016, max= 7182, per=13.09%, avg=6599.00, stdev=824.49, samples=2 00:16:56.784 iops : min= 1504, max= 1795, avg=1649.50, stdev=205.77, samples=2 00:16:56.784 lat (msec) : 10=0.48%, 20=0.82%, 50=88.61%, 100=10.09% 00:16:56.784 cpu : usr=2.19%, sys=5.57%, ctx=241, majf=0, minf=15 00:16:56.784 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:16:56.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.784 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:56.784 issued rwts: total=1536,1775,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:56.784 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:56.784 00:16:56.784 Run status group 0 (all jobs): 00:16:56.784 READ: bw=44.2MiB/s (46.3MB/s), 6101KiB/s-16.0MiB/s (6248kB/s-16.7MB/s), io=44.5MiB (46.6MB), run=1002-1007msec 00:16:56.784 WRITE: bw=49.2MiB/s (51.6MB/s), 7051KiB/s-16.9MiB/s (7220kB/s-17.7MB/s), io=49.6MiB (52.0MB), run=1002-1007msec 00:16:56.784 00:16:56.784 Disk stats (read/write): 00:16:56.784 nvme0n1: 
ios=3634/3682, merge=0/0, ticks=17324/16845, in_queue=34169, util=89.68% 00:16:56.784 nvme0n2: ios=2097/2159, merge=0/0, ticks=17118/16699, in_queue=33817, util=89.82% 00:16:56.784 nvme0n3: ios=3085/3511, merge=0/0, ticks=23234/26321, in_queue=49555, util=89.47% 00:16:56.784 nvme0n4: ios=1300/1536, merge=0/0, ticks=12320/14423, in_queue=26743, util=90.14% 00:16:56.784 14:23:02 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:56.784 [global] 00:16:56.784 thread=1 00:16:56.784 invalidate=1 00:16:56.784 rw=randwrite 00:16:56.784 time_based=1 00:16:56.784 runtime=1 00:16:56.784 ioengine=libaio 00:16:56.784 direct=1 00:16:56.784 bs=4096 00:16:56.784 iodepth=128 00:16:56.784 norandommap=0 00:16:56.784 numjobs=1 00:16:56.784 00:16:56.784 verify_dump=1 00:16:56.784 verify_backlog=512 00:16:56.784 verify_state_save=0 00:16:56.784 do_verify=1 00:16:56.784 verify=crc32c-intel 00:16:56.784 [job0] 00:16:56.784 filename=/dev/nvme0n1 00:16:56.784 [job1] 00:16:56.784 filename=/dev/nvme0n2 00:16:56.784 [job2] 00:16:56.784 filename=/dev/nvme0n3 00:16:56.784 [job3] 00:16:56.784 filename=/dev/nvme0n4 00:16:56.784 Could not set queue depth (nvme0n1) 00:16:56.784 Could not set queue depth (nvme0n2) 00:16:56.784 Could not set queue depth (nvme0n3) 00:16:56.784 Could not set queue depth (nvme0n4) 00:16:56.784 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:56.784 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:56.784 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:56.784 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:56.784 fio-3.35 00:16:56.784 Starting 4 threads 00:16:58.163 00:16:58.163 job0: (groupid=0, jobs=1): err= 0: pid=87496: Thu Dec 5 14:23:03 2024 00:16:58.163 read: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec) 00:16:58.163 slat (usec): min=5, max=13000, avg=112.48, stdev=777.37 00:16:58.163 clat (usec): min=5313, max=28097, avg=14956.04, stdev=3224.67 00:16:58.163 lat (usec): min=5326, max=28122, avg=15068.52, stdev=3276.88 00:16:58.163 clat percentiles (usec): 00:16:58.163 | 1.00th=[ 9503], 5.00th=[10683], 10.00th=[11600], 20.00th=[12780], 00:16:58.163 | 30.00th=[13435], 40.00th=[13829], 50.00th=[14222], 60.00th=[14877], 00:16:58.163 | 70.00th=[15533], 80.00th=[16909], 90.00th=[19268], 95.00th=[21103], 00:16:58.163 | 99.00th=[26084], 99.50th=[27132], 99.90th=[27919], 99.95th=[28181], 00:16:58.163 | 99.99th=[28181] 00:16:58.163 write: IOPS=4547, BW=17.8MiB/s (18.6MB/s)(17.9MiB/1010msec); 0 zone resets 00:16:58.163 slat (usec): min=5, max=13581, avg=111.79, stdev=827.19 00:16:58.163 clat (usec): min=2122, max=29243, avg=14511.47, stdev=2563.99 00:16:58.163 lat (usec): min=3610, max=29283, avg=14623.26, stdev=2679.60 00:16:58.163 clat percentiles (usec): 00:16:58.163 | 1.00th=[ 5800], 5.00th=[ 9241], 10.00th=[11207], 20.00th=[12911], 00:16:58.163 | 30.00th=[14091], 40.00th=[14746], 50.00th=[15139], 60.00th=[15401], 00:16:58.163 | 70.00th=[15664], 80.00th=[16188], 90.00th=[16712], 95.00th=[17171], 00:16:58.163 | 99.00th=[17957], 99.50th=[23725], 99.90th=[27919], 99.95th=[28967], 00:16:58.163 | 99.99th=[29230] 00:16:58.163 bw ( KiB/s): min=17739, max=18016, per=35.74%, avg=17877.50, stdev=195.87, samples=2 00:16:58.163 iops : min= 4434, max= 4504, 
avg=4469.00, stdev=49.50, samples=2 00:16:58.163 lat (msec) : 4=0.06%, 10=3.56%, 20=92.16%, 50=4.22% 00:16:58.163 cpu : usr=4.36%, sys=10.70%, ctx=409, majf=0, minf=9 00:16:58.163 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:58.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:58.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:58.163 issued rwts: total=4096,4593,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:58.163 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:58.163 job1: (groupid=0, jobs=1): err= 0: pid=87497: Thu Dec 5 14:23:03 2024 00:16:58.163 read: IOPS=1748, BW=6992KiB/s (7160kB/s)(7076KiB/1012msec) 00:16:58.163 slat (usec): min=4, max=26921, avg=298.22, stdev=1943.56 00:16:58.163 clat (usec): min=6738, max=96276, avg=35599.12, stdev=15495.74 00:16:58.163 lat (usec): min=9028, max=96285, avg=35897.35, stdev=15585.28 00:16:58.163 clat percentiles (usec): 00:16:58.163 | 1.00th=[ 9634], 5.00th=[18744], 10.00th=[19268], 20.00th=[27395], 00:16:58.163 | 30.00th=[29230], 40.00th=[29754], 50.00th=[32637], 60.00th=[34866], 00:16:58.163 | 70.00th=[36439], 80.00th=[45351], 90.00th=[52691], 95.00th=[65799], 00:16:58.163 | 99.00th=[91751], 99.50th=[93848], 99.90th=[95945], 99.95th=[95945], 00:16:58.163 | 99.99th=[95945] 00:16:58.163 write: IOPS=2023, BW=8095KiB/s (8289kB/s)(8192KiB/1012msec); 0 zone resets 00:16:58.163 slat (usec): min=5, max=40710, avg=225.11, stdev=1504.23 00:16:58.163 clat (usec): min=4930, max=96183, avg=31869.35, stdev=8510.85 00:16:58.163 lat (usec): min=4955, max=96202, avg=32094.46, stdev=8597.34 00:16:58.163 clat percentiles (usec): 00:16:58.163 | 1.00th=[ 6980], 5.00th=[15401], 10.00th=[23725], 20.00th=[27657], 00:16:58.163 | 30.00th=[30016], 40.00th=[31851], 50.00th=[32113], 60.00th=[32900], 00:16:58.163 | 70.00th=[34341], 80.00th=[34341], 90.00th=[41681], 95.00th=[44303], 00:16:58.163 | 99.00th=[60556], 99.50th=[70779], 99.90th=[72877], 99.95th=[95945], 00:16:58.163 | 99.99th=[95945] 00:16:58.163 bw ( KiB/s): min= 8192, max= 8208, per=16.39%, avg=8200.00, stdev=11.31, samples=2 00:16:58.163 iops : min= 2048, max= 2052, avg=2050.00, stdev= 2.83, samples=2 00:16:58.163 lat (msec) : 10=1.81%, 20=10.14%, 50=80.06%, 100=7.99% 00:16:58.163 cpu : usr=1.68%, sys=5.64%, ctx=364, majf=0, minf=13 00:16:58.163 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:16:58.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:58.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:58.163 issued rwts: total=1769,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:58.163 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:58.163 job2: (groupid=0, jobs=1): err= 0: pid=87498: Thu Dec 5 14:23:03 2024 00:16:58.163 read: IOPS=2522, BW=9.85MiB/s (10.3MB/s)(10.0MiB/1015msec) 00:16:58.163 slat (usec): min=5, max=23616, avg=199.09, stdev=1374.46 00:16:58.163 clat (usec): min=7217, max=60639, avg=24484.78, stdev=9751.33 00:16:58.163 lat (usec): min=7231, max=60654, avg=24683.88, stdev=9827.60 00:16:58.163 clat percentiles (usec): 00:16:58.163 | 1.00th=[11994], 5.00th=[12911], 10.00th=[13435], 20.00th=[15401], 00:16:58.163 | 30.00th=[17171], 40.00th=[20317], 50.00th=[24249], 60.00th=[26608], 00:16:58.163 | 70.00th=[27919], 80.00th=[30278], 90.00th=[39584], 95.00th=[44303], 00:16:58.163 | 99.00th=[54789], 99.50th=[58459], 99.90th=[60556], 99.95th=[60556], 00:16:58.163 | 99.99th=[60556] 
00:16:58.163 write: IOPS=2677, BW=10.5MiB/s (11.0MB/s)(10.6MiB/1015msec); 0 zone resets 00:16:58.163 slat (usec): min=4, max=25887, avg=172.25, stdev=1088.21 00:16:58.163 clat (usec): min=6300, max=60577, avg=24293.49, stdev=8736.74 00:16:58.163 lat (usec): min=6334, max=60588, avg=24465.74, stdev=8851.40 00:16:58.163 clat percentiles (usec): 00:16:58.163 | 1.00th=[ 7111], 5.00th=[13173], 10.00th=[14877], 20.00th=[15795], 00:16:58.163 | 30.00th=[17433], 40.00th=[18482], 50.00th=[22676], 60.00th=[29492], 00:16:58.163 | 70.00th=[32113], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:16:58.163 | 99.00th=[42730], 99.50th=[44827], 99.90th=[54264], 99.95th=[55313], 00:16:58.163 | 99.99th=[60556] 00:16:58.163 bw ( KiB/s): min= 8368, max=12360, per=20.72%, avg=10364.00, stdev=2822.77, samples=2 00:16:58.163 iops : min= 2092, max= 3090, avg=2591.00, stdev=705.69, samples=2 00:16:58.164 lat (msec) : 10=1.27%, 20=42.74%, 50=54.85%, 100=1.14% 00:16:58.164 cpu : usr=2.96%, sys=6.80%, ctx=291, majf=0, minf=11 00:16:58.164 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:16:58.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:58.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:58.164 issued rwts: total=2560,2718,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:58.164 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:58.164 job3: (groupid=0, jobs=1): err= 0: pid=87499: Thu Dec 5 14:23:03 2024 00:16:58.164 read: IOPS=3035, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1012msec) 00:16:58.164 slat (usec): min=6, max=16958, avg=159.43, stdev=1118.63 00:16:58.164 clat (usec): min=5731, max=49929, avg=20541.27, stdev=7062.63 00:16:58.164 lat (usec): min=5757, max=49945, avg=20700.69, stdev=7150.65 00:16:58.164 clat percentiles (usec): 00:16:58.164 | 1.00th=[12387], 5.00th=[13173], 10.00th=[13829], 20.00th=[15533], 00:16:58.164 | 30.00th=[16319], 40.00th=[16909], 50.00th=[17695], 60.00th=[19268], 00:16:58.164 | 70.00th=[21103], 80.00th=[26608], 90.00th=[33162], 95.00th=[34341], 00:16:58.164 | 99.00th=[41681], 99.50th=[42206], 99.90th=[47973], 99.95th=[49546], 00:16:58.164 | 99.99th=[50070] 00:16:58.164 write: IOPS=3294, BW=12.9MiB/s (13.5MB/s)(13.0MiB/1012msec); 0 zone resets 00:16:58.164 slat (usec): min=5, max=14811, avg=146.03, stdev=966.49 00:16:58.164 clat (usec): min=3321, max=43432, avg=19536.95, stdev=6907.97 00:16:58.164 lat (usec): min=3341, max=43468, avg=19682.98, stdev=6988.15 00:16:58.164 clat percentiles (usec): 00:16:58.164 | 1.00th=[ 5735], 5.00th=[ 9241], 10.00th=[14222], 20.00th=[16581], 00:16:58.164 | 30.00th=[17433], 40.00th=[17695], 50.00th=[17695], 60.00th=[18220], 00:16:58.164 | 70.00th=[18744], 80.00th=[20317], 90.00th=[31065], 95.00th=[34866], 00:16:58.164 | 99.00th=[40633], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:16:58.164 | 99.99th=[43254] 00:16:58.164 bw ( KiB/s): min= 9344, max=16336, per=25.67%, avg=12840.00, stdev=4944.09, samples=2 00:16:58.164 iops : min= 2336, max= 4084, avg=3210.00, stdev=1236.02, samples=2 00:16:58.164 lat (msec) : 4=0.09%, 10=3.53%, 20=67.08%, 50=29.30% 00:16:58.164 cpu : usr=3.36%, sys=8.90%, ctx=401, majf=0, minf=18 00:16:58.164 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:16:58.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:58.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:58.164 issued rwts: total=3072,3334,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:16:58.164 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:58.164 00:16:58.164 Run status group 0 (all jobs): 00:16:58.164 READ: bw=44.2MiB/s (46.4MB/s), 6992KiB/s-15.8MiB/s (7160kB/s-16.6MB/s), io=44.9MiB (47.1MB), run=1010-1015msec 00:16:58.164 WRITE: bw=48.8MiB/s (51.2MB/s), 8095KiB/s-17.8MiB/s (8289kB/s-18.6MB/s), io=49.6MiB (52.0MB), run=1010-1015msec 00:16:58.164 00:16:58.164 Disk stats (read/write): 00:16:58.164 nvme0n1: ios=3634/3821, merge=0/0, ticks=50553/52695, in_queue=103248, util=88.88% 00:16:58.164 nvme0n2: ios=1584/1740, merge=0/0, ticks=46757/50911, in_queue=97668, util=89.38% 00:16:58.164 nvme0n3: ios=2031/2048, merge=0/0, ticks=52037/54211, in_queue=106248, util=89.28% 00:16:58.164 nvme0n4: ios=2735/3072, merge=0/0, ticks=46636/49446, in_queue=96082, util=89.72% 00:16:58.164 14:23:03 -- target/fio.sh@55 -- # sync 00:16:58.164 14:23:03 -- target/fio.sh@59 -- # fio_pid=87518 00:16:58.164 14:23:03 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:58.164 14:23:03 -- target/fio.sh@61 -- # sleep 3 00:16:58.164 [global] 00:16:58.164 thread=1 00:16:58.164 invalidate=1 00:16:58.164 rw=read 00:16:58.164 time_based=1 00:16:58.164 runtime=10 00:16:58.164 ioengine=libaio 00:16:58.164 direct=1 00:16:58.164 bs=4096 00:16:58.164 iodepth=1 00:16:58.164 norandommap=1 00:16:58.164 numjobs=1 00:16:58.164 00:16:58.164 [job0] 00:16:58.164 filename=/dev/nvme0n1 00:16:58.164 [job1] 00:16:58.164 filename=/dev/nvme0n2 00:16:58.164 [job2] 00:16:58.164 filename=/dev/nvme0n3 00:16:58.164 [job3] 00:16:58.164 filename=/dev/nvme0n4 00:16:58.164 Could not set queue depth (nvme0n1) 00:16:58.164 Could not set queue depth (nvme0n2) 00:16:58.164 Could not set queue depth (nvme0n3) 00:16:58.164 Could not set queue depth (nvme0n4) 00:16:58.424 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:58.424 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:58.424 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:58.424 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:58.424 fio-3.35 00:16:58.424 Starting 4 threads 00:17:01.711 14:23:06 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:01.711 fio: pid=87561, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:17:01.711 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=53067776, buflen=4096 00:17:01.711 14:23:06 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:01.711 fio: pid=87560, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:17:01.711 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=32202752, buflen=4096 00:17:01.711 14:23:07 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:01.711 14:23:07 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:01.711 fio: pid=87558, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:17:01.711 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=58425344, buflen=4096 00:17:01.971 14:23:07 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:17:01.971 14:23:07 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:01.971 fio: pid=87559, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:17:01.971 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=43364352, buflen=4096 00:17:01.971 00:17:01.971 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87558: Thu Dec 5 14:23:07 2024 00:17:01.971 read: IOPS=4230, BW=16.5MiB/s (17.3MB/s)(55.7MiB/3372msec) 00:17:01.971 slat (usec): min=7, max=16874, avg=23.73, stdev=229.06 00:17:01.971 clat (usec): min=115, max=2343, avg=211.10, stdev=50.47 00:17:01.971 lat (usec): min=126, max=17047, avg=234.83, stdev=234.62 00:17:01.971 clat percentiles (usec): 00:17:01.971 | 1.00th=[ 143], 5.00th=[ 163], 10.00th=[ 172], 20.00th=[ 184], 00:17:01.971 | 30.00th=[ 192], 40.00th=[ 200], 50.00th=[ 206], 60.00th=[ 215], 00:17:01.971 | 70.00th=[ 223], 80.00th=[ 235], 90.00th=[ 251], 95.00th=[ 269], 00:17:01.971 | 99.00th=[ 322], 99.50th=[ 371], 99.90th=[ 709], 99.95th=[ 1156], 00:17:01.971 | 99.99th=[ 1745] 00:17:01.971 bw ( KiB/s): min=15760, max=17696, per=33.58%, avg=16974.33, stdev=765.67, samples=6 00:17:01.971 iops : min= 3940, max= 4424, avg=4243.50, stdev=191.51, samples=6 00:17:01.971 lat (usec) : 250=90.00%, 500=9.81%, 750=0.09%, 1000=0.04% 00:17:01.971 lat (msec) : 2=0.05%, 4=0.01% 00:17:01.971 cpu : usr=1.33%, sys=6.64%, ctx=14275, majf=0, minf=1 00:17:01.971 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:01.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.971 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.971 issued rwts: total=14265,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:01.971 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:01.971 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87559: Thu Dec 5 14:23:07 2024 00:17:01.971 read: IOPS=2929, BW=11.4MiB/s (12.0MB/s)(41.4MiB/3614msec) 00:17:01.971 slat (usec): min=6, max=12651, avg=18.99, stdev=216.92 00:17:01.971 clat (usec): min=36, max=3065, avg=320.86, stdev=123.11 00:17:01.971 lat (usec): min=125, max=12920, avg=339.86, stdev=248.00 00:17:01.971 clat percentiles (usec): 00:17:01.971 | 1.00th=[ 124], 5.00th=[ 137], 10.00th=[ 155], 20.00th=[ 210], 00:17:01.971 | 30.00th=[ 281], 40.00th=[ 326], 50.00th=[ 347], 60.00th=[ 363], 00:17:01.971 | 70.00th=[ 375], 80.00th=[ 392], 90.00th=[ 420], 95.00th=[ 445], 00:17:01.971 | 99.00th=[ 570], 99.50th=[ 652], 99.90th=[ 1319], 99.95th=[ 2540], 00:17:01.971 | 99.99th=[ 3064] 00:17:01.971 bw ( KiB/s): min=10096, max=11144, per=20.89%, avg=10559.50, stdev=400.82, samples=6 00:17:01.971 iops : min= 2524, max= 2786, avg=2639.83, stdev=100.20, samples=6 00:17:01.971 lat (usec) : 50=0.01%, 250=26.83%, 500=70.98%, 750=1.86%, 1000=0.15% 00:17:01.971 lat (msec) : 2=0.09%, 4=0.07% 00:17:01.971 cpu : usr=1.05%, sys=3.54%, ctx=10609, majf=0, minf=1 00:17:01.971 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:01.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.971 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.971 issued rwts: total=10588,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:01.971 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:01.971 job2: 
(groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87560: Thu Dec 5 14:23:07 2024 00:17:01.971 read: IOPS=2488, BW=9952KiB/s (10.2MB/s)(30.7MiB/3160msec) 00:17:01.971 slat (usec): min=7, max=15751, avg=17.97, stdev=218.75 00:17:01.971 clat (usec): min=177, max=2958, avg=381.89, stdev=98.89 00:17:01.971 lat (usec): min=188, max=16019, avg=399.86, stdev=239.95 00:17:01.972 clat percentiles (usec): 00:17:01.972 | 1.00th=[ 200], 5.00th=[ 273], 10.00th=[ 302], 20.00th=[ 330], 00:17:01.972 | 30.00th=[ 343], 40.00th=[ 359], 50.00th=[ 367], 60.00th=[ 379], 00:17:01.972 | 70.00th=[ 396], 80.00th=[ 416], 90.00th=[ 465], 95.00th=[ 586], 00:17:01.972 | 99.00th=[ 676], 99.50th=[ 709], 99.90th=[ 1004], 99.95th=[ 1729], 00:17:01.972 | 99.99th=[ 2966] 00:17:01.972 bw ( KiB/s): min= 7368, max=10940, per=19.76%, avg=9988.67, stdev=1329.37, samples=6 00:17:01.972 iops : min= 1842, max= 2735, avg=2497.17, stdev=332.34, samples=6 00:17:01.972 lat (usec) : 250=3.29%, 500=88.52%, 750=7.92%, 1000=0.15% 00:17:01.972 lat (msec) : 2=0.06%, 4=0.04% 00:17:01.972 cpu : usr=1.14%, sys=2.98%, ctx=7866, majf=0, minf=2 00:17:01.972 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:01.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.972 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.972 issued rwts: total=7863,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:01.972 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:01.972 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87561: Thu Dec 5 14:23:07 2024 00:17:01.972 read: IOPS=4395, BW=17.2MiB/s (18.0MB/s)(50.6MiB/2948msec) 00:17:01.972 slat (usec): min=12, max=187, avg=17.04, stdev= 6.44 00:17:01.972 clat (usec): min=3, max=2765, avg=209.06, stdev=38.63 00:17:01.972 lat (usec): min=146, max=2779, avg=226.10, stdev=39.13 00:17:01.972 clat percentiles (usec): 00:17:01.972 | 1.00th=[ 151], 5.00th=[ 167], 10.00th=[ 176], 20.00th=[ 186], 00:17:01.972 | 30.00th=[ 194], 40.00th=[ 200], 50.00th=[ 206], 60.00th=[ 215], 00:17:01.972 | 70.00th=[ 223], 80.00th=[ 231], 90.00th=[ 245], 95.00th=[ 255], 00:17:01.972 | 99.00th=[ 289], 99.50th=[ 302], 99.90th=[ 371], 99.95th=[ 457], 00:17:01.972 | 99.99th=[ 1090] 00:17:01.972 bw ( KiB/s): min=17136, max=18000, per=34.88%, avg=17632.00, stdev=336.71, samples=5 00:17:01.972 iops : min= 4284, max= 4500, avg=4408.00, stdev=84.18, samples=5 00:17:01.972 lat (usec) : 4=0.01%, 250=92.91%, 500=7.03%, 750=0.02% 00:17:01.972 lat (msec) : 2=0.02%, 4=0.01% 00:17:01.972 cpu : usr=0.92%, sys=6.04%, ctx=12966, majf=0, minf=2 00:17:01.972 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:01.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.972 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.972 issued rwts: total=12957,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:01.972 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:01.972 00:17:01.972 Run status group 0 (all jobs): 00:17:01.972 READ: bw=49.4MiB/s (51.8MB/s), 9952KiB/s-17.2MiB/s (10.2MB/s-18.0MB/s), io=178MiB (187MB), run=2948-3614msec 00:17:01.972 00:17:01.972 Disk stats (read/write): 00:17:01.972 nvme0n1: ios=14253/0, merge=0/0, ticks=3066/0, in_queue=3066, util=94.93% 00:17:01.972 nvme0n2: ios=9238/0, merge=0/0, ticks=3141/0, in_queue=3141, util=95.47% 00:17:01.972 nvme0n3: 
ios=7745/0, merge=0/0, ticks=2907/0, in_queue=2907, util=95.96% 00:17:01.972 nvme0n4: ios=12633/0, merge=0/0, ticks=2703/0, in_queue=2703, util=96.76% 00:17:01.972 14:23:07 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:01.972 14:23:07 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:17:02.231 14:23:07 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:02.231 14:23:07 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:17:02.490 14:23:08 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:02.490 14:23:08 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:02.749 14:23:08 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:02.749 14:23:08 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:03.008 14:23:08 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:03.008 14:23:08 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:17:03.266 14:23:08 -- target/fio.sh@69 -- # fio_status=0 00:17:03.266 14:23:08 -- target/fio.sh@70 -- # wait 87518 00:17:03.266 14:23:08 -- target/fio.sh@70 -- # fio_status=4 00:17:03.266 14:23:08 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:03.525 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:03.525 14:23:08 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:03.525 14:23:08 -- common/autotest_common.sh@1208 -- # local i=0 00:17:03.525 14:23:08 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:17:03.525 14:23:08 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:03.525 14:23:08 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:17:03.525 14:23:08 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:03.525 nvmf hotplug test: fio failed as expected 00:17:03.525 14:23:08 -- common/autotest_common.sh@1220 -- # return 0 00:17:03.525 14:23:08 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:03.525 14:23:08 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:03.525 14:23:08 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:03.784 14:23:09 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:03.784 14:23:09 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:03.784 14:23:09 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:03.785 14:23:09 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:03.785 14:23:09 -- target/fio.sh@91 -- # nvmftestfini 00:17:03.785 14:23:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:03.785 14:23:09 -- nvmf/common.sh@116 -- # sync 00:17:03.785 14:23:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:03.785 14:23:09 -- nvmf/common.sh@119 -- # set +e 00:17:03.785 14:23:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:03.785 14:23:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:03.785 rmmod nvme_tcp 00:17:03.785 rmmod nvme_fabrics 00:17:03.785 rmmod nvme_keyring 00:17:03.785 14:23:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 
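Editor's note: stripped of the xtrace noise, the hotplug check that just ran amounts to the sketch below — delete the backing bdevs while fio is still issuing reads, expect fio to fail, disconnect the initiator, and wait for the namespaces to disappear. The commands are the ones visible in the trace; the loop bound and the exact helper structure are illustrative, not the verbatim fio.sh code.

  # Illustrative replay of the hotplug check traced above (not the exact fio.sh code).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for malloc_bdev in Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
      $rpc bdev_malloc_delete "$malloc_bdev"        # yank the backing bdevs while fio is still running
  done
  fio_status=0
  wait "$fio_pid" || fio_status=$?                  # fio exits non-zero (status 4 above) once its files vanish
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  i=0
  while lsblk -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
      sleep 1; i=$((i + 1))
      [ "$i" -ge 20 ] && break                      # retry bound is an assumption, not taken from the trace
  done
  [ "$fio_status" -ne 0 ] && echo 'nvmf hotplug test: fio failed as expected'
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The point of the check is that fio failing here is the success condition: the backing storage was removed on purpose, so surviving I/O would indicate the target failed to propagate the removal.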
00:17:03.785 14:23:09 -- nvmf/common.sh@123 -- # set -e 00:17:03.785 14:23:09 -- nvmf/common.sh@124 -- # return 0 00:17:03.785 14:23:09 -- nvmf/common.sh@477 -- # '[' -n 87027 ']' 00:17:03.785 14:23:09 -- nvmf/common.sh@478 -- # killprocess 87027 00:17:03.785 14:23:09 -- common/autotest_common.sh@936 -- # '[' -z 87027 ']' 00:17:03.785 14:23:09 -- common/autotest_common.sh@940 -- # kill -0 87027 00:17:03.785 14:23:09 -- common/autotest_common.sh@941 -- # uname 00:17:03.785 14:23:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:03.785 14:23:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87027 00:17:03.785 killing process with pid 87027 00:17:03.785 14:23:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:03.785 14:23:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:03.785 14:23:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87027' 00:17:03.785 14:23:09 -- common/autotest_common.sh@955 -- # kill 87027 00:17:03.785 14:23:09 -- common/autotest_common.sh@960 -- # wait 87027 00:17:04.044 14:23:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:04.044 14:23:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:04.044 14:23:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:04.044 14:23:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:04.044 14:23:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:04.044 14:23:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.044 14:23:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:04.044 14:23:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:04.044 14:23:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:04.044 00:17:04.044 real 0m19.353s 00:17:04.044 user 1m14.698s 00:17:04.044 sys 0m7.523s 00:17:04.044 14:23:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:04.044 14:23:09 -- common/autotest_common.sh@10 -- # set +x 00:17:04.044 ************************************ 00:17:04.044 END TEST nvmf_fio_target 00:17:04.044 ************************************ 00:17:04.044 14:23:09 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:04.044 14:23:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:04.044 14:23:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:04.044 14:23:09 -- common/autotest_common.sh@10 -- # set +x 00:17:04.044 ************************************ 00:17:04.044 START TEST nvmf_bdevio 00:17:04.044 ************************************ 00:17:04.044 14:23:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:04.044 * Looking for test storage... 
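Editor's note: the killprocess helper traced just above (for pid 87027) reduces to roughly the following. This is a reconstruction from the xtrace lines, not the verbatim autotest_common.sh function.

  # Reconstruction of killprocess from the xtrace above (sketch, not verbatim).
  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" 2>/dev/null || return 0           # nothing to do if the process already exited
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      # the real helper special-cases process_name = sudo here; omitted in this sketch
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }
  killprocess 87027                                    # the nvmf_tgt started for the fio target tests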
00:17:04.044 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:04.044 14:23:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:04.044 14:23:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:04.045 14:23:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:04.305 14:23:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:04.305 14:23:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:04.305 14:23:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:04.305 14:23:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:04.305 14:23:09 -- scripts/common.sh@335 -- # IFS=.-: 00:17:04.305 14:23:09 -- scripts/common.sh@335 -- # read -ra ver1 00:17:04.305 14:23:09 -- scripts/common.sh@336 -- # IFS=.-: 00:17:04.305 14:23:09 -- scripts/common.sh@336 -- # read -ra ver2 00:17:04.305 14:23:09 -- scripts/common.sh@337 -- # local 'op=<' 00:17:04.305 14:23:09 -- scripts/common.sh@339 -- # ver1_l=2 00:17:04.305 14:23:09 -- scripts/common.sh@340 -- # ver2_l=1 00:17:04.305 14:23:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:04.305 14:23:09 -- scripts/common.sh@343 -- # case "$op" in 00:17:04.305 14:23:09 -- scripts/common.sh@344 -- # : 1 00:17:04.305 14:23:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:04.305 14:23:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:04.305 14:23:09 -- scripts/common.sh@364 -- # decimal 1 00:17:04.305 14:23:09 -- scripts/common.sh@352 -- # local d=1 00:17:04.305 14:23:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:04.305 14:23:09 -- scripts/common.sh@354 -- # echo 1 00:17:04.305 14:23:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:04.305 14:23:09 -- scripts/common.sh@365 -- # decimal 2 00:17:04.305 14:23:09 -- scripts/common.sh@352 -- # local d=2 00:17:04.305 14:23:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:04.305 14:23:09 -- scripts/common.sh@354 -- # echo 2 00:17:04.305 14:23:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:04.305 14:23:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:04.305 14:23:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:04.305 14:23:09 -- scripts/common.sh@367 -- # return 0 00:17:04.305 14:23:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:04.305 14:23:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:04.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:04.305 --rc genhtml_branch_coverage=1 00:17:04.305 --rc genhtml_function_coverage=1 00:17:04.305 --rc genhtml_legend=1 00:17:04.305 --rc geninfo_all_blocks=1 00:17:04.305 --rc geninfo_unexecuted_blocks=1 00:17:04.305 00:17:04.305 ' 00:17:04.305 14:23:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:04.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:04.305 --rc genhtml_branch_coverage=1 00:17:04.305 --rc genhtml_function_coverage=1 00:17:04.305 --rc genhtml_legend=1 00:17:04.305 --rc geninfo_all_blocks=1 00:17:04.305 --rc geninfo_unexecuted_blocks=1 00:17:04.305 00:17:04.305 ' 00:17:04.305 14:23:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:04.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:04.305 --rc genhtml_branch_coverage=1 00:17:04.305 --rc genhtml_function_coverage=1 00:17:04.305 --rc genhtml_legend=1 00:17:04.305 --rc geninfo_all_blocks=1 00:17:04.305 --rc geninfo_unexecuted_blocks=1 00:17:04.305 00:17:04.305 ' 00:17:04.305 
14:23:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:04.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:04.305 --rc genhtml_branch_coverage=1 00:17:04.305 --rc genhtml_function_coverage=1 00:17:04.305 --rc genhtml_legend=1 00:17:04.305 --rc geninfo_all_blocks=1 00:17:04.305 --rc geninfo_unexecuted_blocks=1 00:17:04.305 00:17:04.305 ' 00:17:04.305 14:23:09 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:04.305 14:23:09 -- nvmf/common.sh@7 -- # uname -s 00:17:04.305 14:23:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:04.305 14:23:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:04.305 14:23:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:04.305 14:23:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:04.305 14:23:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:04.305 14:23:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:04.305 14:23:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:04.305 14:23:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:04.305 14:23:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:04.305 14:23:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:04.305 14:23:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:17:04.305 14:23:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:17:04.305 14:23:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:04.305 14:23:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:04.305 14:23:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:04.305 14:23:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:04.305 14:23:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:04.305 14:23:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:04.305 14:23:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:04.305 14:23:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.306 14:23:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.306 14:23:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.306 14:23:09 -- paths/export.sh@5 -- # export PATH 00:17:04.306 14:23:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.306 14:23:09 -- nvmf/common.sh@46 -- # : 0 00:17:04.306 14:23:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:04.306 14:23:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:04.306 14:23:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:04.306 14:23:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:04.306 14:23:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:04.306 14:23:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:04.306 14:23:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:04.306 14:23:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:04.306 14:23:09 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:04.306 14:23:09 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:04.306 14:23:09 -- target/bdevio.sh@14 -- # nvmftestinit 00:17:04.306 14:23:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:04.306 14:23:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:04.306 14:23:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:04.306 14:23:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:04.306 14:23:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:04.306 14:23:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.306 14:23:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:04.306 14:23:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:04.306 14:23:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:04.306 14:23:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:04.306 14:23:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:04.306 14:23:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:04.306 14:23:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:04.306 14:23:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:04.306 14:23:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:04.306 14:23:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:04.306 14:23:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:04.306 14:23:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:04.306 14:23:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:04.306 14:23:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:04.306 14:23:09 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:04.306 14:23:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:04.306 14:23:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:04.306 14:23:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:04.306 14:23:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:04.306 14:23:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:04.306 14:23:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:04.306 14:23:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:04.306 Cannot find device "nvmf_tgt_br" 00:17:04.306 14:23:09 -- nvmf/common.sh@154 -- # true 00:17:04.306 14:23:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:04.306 Cannot find device "nvmf_tgt_br2" 00:17:04.306 14:23:09 -- nvmf/common.sh@155 -- # true 00:17:04.306 14:23:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:04.306 14:23:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:04.306 Cannot find device "nvmf_tgt_br" 00:17:04.306 14:23:09 -- nvmf/common.sh@157 -- # true 00:17:04.306 14:23:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:04.306 Cannot find device "nvmf_tgt_br2" 00:17:04.306 14:23:09 -- nvmf/common.sh@158 -- # true 00:17:04.306 14:23:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:04.306 14:23:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:04.306 14:23:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:04.566 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:04.566 14:23:09 -- nvmf/common.sh@161 -- # true 00:17:04.566 14:23:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:04.566 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:04.566 14:23:09 -- nvmf/common.sh@162 -- # true 00:17:04.566 14:23:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:04.566 14:23:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:04.566 14:23:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:04.566 14:23:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:04.566 14:23:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:04.566 14:23:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:04.566 14:23:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:04.566 14:23:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:04.566 14:23:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:04.566 14:23:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:04.566 14:23:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:04.566 14:23:10 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:04.566 14:23:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:04.566 14:23:10 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:04.566 14:23:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:04.566 14:23:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:17:04.566 14:23:10 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:04.566 14:23:10 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:04.566 14:23:10 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:04.566 14:23:10 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:04.566 14:23:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:04.566 14:23:10 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:04.566 14:23:10 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:04.566 14:23:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:04.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:04.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:17:04.566 00:17:04.566 --- 10.0.0.2 ping statistics --- 00:17:04.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.566 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:17:04.566 14:23:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:04.566 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:04.566 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:17:04.566 00:17:04.566 --- 10.0.0.3 ping statistics --- 00:17:04.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.566 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:17:04.566 14:23:10 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:04.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:04.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:17:04.566 00:17:04.566 --- 10.0.0.1 ping statistics --- 00:17:04.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.566 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:17:04.566 14:23:10 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:04.566 14:23:10 -- nvmf/common.sh@421 -- # return 0 00:17:04.566 14:23:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:04.566 14:23:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:04.566 14:23:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:04.566 14:23:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:04.566 14:23:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:04.566 14:23:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:04.566 14:23:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:04.566 14:23:10 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:04.566 14:23:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:04.566 14:23:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:04.566 14:23:10 -- common/autotest_common.sh@10 -- # set +x 00:17:04.566 14:23:10 -- nvmf/common.sh@469 -- # nvmfpid=87894 00:17:04.566 14:23:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:04.566 14:23:10 -- nvmf/common.sh@470 -- # waitforlisten 87894 00:17:04.566 14:23:10 -- common/autotest_common.sh@829 -- # '[' -z 87894 ']' 00:17:04.566 14:23:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.566 14:23:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:04.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
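Editor's note: for orientation, the veth topology that nvmf_veth_init builds above can be condensed as follows. Every command appears in the trace; only the grouping is condensed, and the interface names and addresses are the ones used in this run.

  # Condensed replay of nvmf_veth_init as traced above (names and addresses as in this run).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end stays in the root namespace
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target ends move into the namespace
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge joins the three *_br peers
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                       # reachability checks, as above

The initiator then reaches the target at 10.0.0.2:4420 through nvmf_init_if in the root namespace, while nvmf_tgt itself runs inside nvmf_tgt_ns_spdk.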
00:17:04.566 14:23:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.566 14:23:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:04.566 14:23:10 -- common/autotest_common.sh@10 -- # set +x 00:17:04.825 [2024-12-05 14:23:10.225385] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:04.825 [2024-12-05 14:23:10.225482] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:04.825 [2024-12-05 14:23:10.366103] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:04.825 [2024-12-05 14:23:10.424704] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:04.825 [2024-12-05 14:23:10.425313] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:04.825 [2024-12-05 14:23:10.425529] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:04.825 [2024-12-05 14:23:10.425867] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:04.825 [2024-12-05 14:23:10.426322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:04.825 [2024-12-05 14:23:10.426381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:04.825 [2024-12-05 14:23:10.426522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:04.825 [2024-12-05 14:23:10.426775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:05.764 14:23:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:05.764 14:23:11 -- common/autotest_common.sh@862 -- # return 0 00:17:05.764 14:23:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:05.765 14:23:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:05.765 14:23:11 -- common/autotest_common.sh@10 -- # set +x 00:17:05.765 14:23:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:05.765 14:23:11 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:05.765 14:23:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.765 14:23:11 -- common/autotest_common.sh@10 -- # set +x 00:17:05.765 [2024-12-05 14:23:11.290577] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:05.765 14:23:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.765 14:23:11 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:05.765 14:23:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.765 14:23:11 -- common/autotest_common.sh@10 -- # set +x 00:17:05.765 Malloc0 00:17:05.765 14:23:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.765 14:23:11 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:05.765 14:23:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.765 14:23:11 -- common/autotest_common.sh@10 -- # set +x 00:17:05.765 14:23:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.765 14:23:11 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:05.765 14:23:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.765 
14:23:11 -- common/autotest_common.sh@10 -- # set +x 00:17:05.765 14:23:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.765 14:23:11 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:05.765 14:23:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.765 14:23:11 -- common/autotest_common.sh@10 -- # set +x 00:17:05.765 [2024-12-05 14:23:11.375120] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:05.765 14:23:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.765 14:23:11 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:05.765 14:23:11 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:05.765 14:23:11 -- nvmf/common.sh@520 -- # config=() 00:17:05.765 14:23:11 -- nvmf/common.sh@520 -- # local subsystem config 00:17:05.765 14:23:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:05.765 14:23:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:05.765 { 00:17:05.765 "params": { 00:17:05.765 "name": "Nvme$subsystem", 00:17:05.765 "trtype": "$TEST_TRANSPORT", 00:17:05.765 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:05.765 "adrfam": "ipv4", 00:17:05.765 "trsvcid": "$NVMF_PORT", 00:17:05.765 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:05.765 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:05.765 "hdgst": ${hdgst:-false}, 00:17:05.765 "ddgst": ${ddgst:-false} 00:17:05.765 }, 00:17:05.765 "method": "bdev_nvme_attach_controller" 00:17:05.765 } 00:17:05.765 EOF 00:17:05.765 )") 00:17:05.765 14:23:11 -- nvmf/common.sh@542 -- # cat 00:17:05.765 14:23:11 -- nvmf/common.sh@544 -- # jq . 00:17:05.765 14:23:11 -- nvmf/common.sh@545 -- # IFS=, 00:17:05.765 14:23:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:05.765 "params": { 00:17:05.765 "name": "Nvme1", 00:17:05.765 "trtype": "tcp", 00:17:05.765 "traddr": "10.0.0.2", 00:17:05.765 "adrfam": "ipv4", 00:17:05.765 "trsvcid": "4420", 00:17:05.765 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:05.765 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:05.765 "hdgst": false, 00:17:05.765 "ddgst": false 00:17:05.765 }, 00:17:05.765 "method": "bdev_nvme_attach_controller" 00:17:05.765 }' 00:17:06.025 [2024-12-05 14:23:11.429673] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:06.025 [2024-12-05 14:23:11.429739] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87948 ] 00:17:06.025 [2024-12-05 14:23:11.572321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:06.025 [2024-12-05 14:23:11.665606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:06.025 [2024-12-05 14:23:11.665756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:06.025 [2024-12-05 14:23:11.665767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.283 [2024-12-05 14:23:11.901979] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
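Editor's note: put together, the RPC sequence traced above stands up the target that bdevio then attaches to. A compact sketch, using the same commands and arguments as the trace (rpc_cmd is a thin wrapper around scripts/rpc.py):

  # Target bring-up as traced above (compact sketch).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                     # TCP transport with the options used above
  $rpc bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB malloc bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # bdevio then reads a JSON config equivalent to the one printed above:
  #   { "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
  #                 "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
  #                 "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false },
  #     "method": "bdev_nvme_attach_controller" }

The bdevio suite that follows therefore exercises a single Nvme1n1 bdev backed by that 64 MiB malloc namespace over NVMe/TCP.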
00:17:06.283 [2024-12-05 14:23:11.902041] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:17:06.283 I/O targets: 00:17:06.283 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:06.283 00:17:06.283 00:17:06.283 CUnit - A unit testing framework for C - Version 2.1-3 00:17:06.283 http://cunit.sourceforge.net/ 00:17:06.283 00:17:06.284 00:17:06.284 Suite: bdevio tests on: Nvme1n1 00:17:06.542 Test: blockdev write read block ...passed 00:17:06.542 Test: blockdev write zeroes read block ...passed 00:17:06.542 Test: blockdev write zeroes read no split ...passed 00:17:06.542 Test: blockdev write zeroes read split ...passed 00:17:06.542 Test: blockdev write zeroes read split partial ...passed 00:17:06.542 Test: blockdev reset ...[2024-12-05 14:23:12.020328] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:06.542 [2024-12-05 14:23:12.020436] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2256ed0 (9): Bad file descriptor 00:17:06.542 passed 00:17:06.542 Test: blockdev write read 8 blocks ...[2024-12-05 14:23:12.031711] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:06.542 passed 00:17:06.542 Test: blockdev write read size > 128k ...passed 00:17:06.542 Test: blockdev write read invalid size ...passed 00:17:06.542 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:06.542 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:06.542 Test: blockdev write read max offset ...passed 00:17:06.542 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:06.542 Test: blockdev writev readv 8 blocks ...passed 00:17:06.542 Test: blockdev writev readv 30 x 1block ...passed 00:17:06.802 Test: blockdev writev readv block ...passed 00:17:06.802 Test: blockdev writev readv size > 128k ...passed 00:17:06.802 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:06.802 Test: blockdev comparev and writev ...[2024-12-05 14:23:12.202369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:06.802 [2024-12-05 14:23:12.202416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:06.802 [2024-12-05 14:23:12.202434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:06.802 [2024-12-05 14:23:12.202443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.802 [2024-12-05 14:23:12.202778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:06.802 [2024-12-05 14:23:12.202799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:06.802 [2024-12-05 14:23:12.202842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:06.802 [2024-12-05 14:23:12.202869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:06.802 [2024-12-05 14:23:12.203230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:06.802 [2024-12-05 14:23:12.203245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:06.802 [2024-12-05 14:23:12.203260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:06.802 [2024-12-05 14:23:12.203270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:06.802 [2024-12-05 14:23:12.203571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:06.802 [2024-12-05 14:23:12.203586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:06.802 [2024-12-05 14:23:12.203600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:06.802 [2024-12-05 14:23:12.203608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:06.802 passed 00:17:06.802 Test: blockdev nvme passthru rw ...passed 00:17:06.802 Test: blockdev nvme passthru vendor specific ...passed 00:17:06.802 Test: blockdev nvme admin passthru ...[2024-12-05 14:23:12.286120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:06.802 [2024-12-05 14:23:12.286149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:06.802 [2024-12-05 14:23:12.286306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:06.802 [2024-12-05 14:23:12.286322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:06.802 [2024-12-05 14:23:12.286455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:06.802 [2024-12-05 14:23:12.286470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:06.802 [2024-12-05 14:23:12.286577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:06.802 [2024-12-05 14:23:12.286597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:06.802 passed 00:17:06.802 Test: blockdev copy ...passed 00:17:06.802 00:17:06.802 Run Summary: Type Total Ran Passed Failed Inactive 00:17:06.802 suites 1 1 n/a 0 0 00:17:06.802 tests 23 23 23 0 0 00:17:06.802 asserts 152 152 152 0 n/a 00:17:06.802 00:17:06.802 Elapsed time = 0.881 seconds 00:17:07.062 14:23:12 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:07.062 14:23:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.062 14:23:12 -- common/autotest_common.sh@10 -- # set +x 00:17:07.062 14:23:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.062 14:23:12 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:07.062 14:23:12 -- target/bdevio.sh@30 -- # nvmftestfini 00:17:07.062 14:23:12 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:17:07.062 14:23:12 -- nvmf/common.sh@116 -- # sync 00:17:07.322 14:23:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:07.322 14:23:12 -- nvmf/common.sh@119 -- # set +e 00:17:07.322 14:23:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:07.322 14:23:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:07.322 rmmod nvme_tcp 00:17:07.322 rmmod nvme_fabrics 00:17:07.322 rmmod nvme_keyring 00:17:07.322 14:23:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:07.322 14:23:12 -- nvmf/common.sh@123 -- # set -e 00:17:07.322 14:23:12 -- nvmf/common.sh@124 -- # return 0 00:17:07.322 14:23:12 -- nvmf/common.sh@477 -- # '[' -n 87894 ']' 00:17:07.322 14:23:12 -- nvmf/common.sh@478 -- # killprocess 87894 00:17:07.322 14:23:12 -- common/autotest_common.sh@936 -- # '[' -z 87894 ']' 00:17:07.322 14:23:12 -- common/autotest_common.sh@940 -- # kill -0 87894 00:17:07.322 14:23:12 -- common/autotest_common.sh@941 -- # uname 00:17:07.322 14:23:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:07.322 14:23:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87894 00:17:07.322 14:23:12 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:17:07.322 14:23:12 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:17:07.322 14:23:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87894' 00:17:07.322 killing process with pid 87894 00:17:07.322 14:23:12 -- common/autotest_common.sh@955 -- # kill 87894 00:17:07.322 14:23:12 -- common/autotest_common.sh@960 -- # wait 87894 00:17:07.582 14:23:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:07.582 14:23:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:07.582 14:23:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:07.582 14:23:13 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:07.582 14:23:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:07.582 14:23:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.582 14:23:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:07.582 14:23:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.582 14:23:13 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:07.582 ************************************ 00:17:07.582 END TEST nvmf_bdevio 00:17:07.582 ************************************ 00:17:07.582 00:17:07.582 real 0m3.563s 00:17:07.582 user 0m13.226s 00:17:07.582 sys 0m0.903s 00:17:07.582 14:23:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:07.582 14:23:13 -- common/autotest_common.sh@10 -- # set +x 00:17:07.582 14:23:13 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:17:07.582 14:23:13 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:07.582 14:23:13 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:07.582 14:23:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:07.582 14:23:13 -- common/autotest_common.sh@10 -- # set +x 00:17:07.582 ************************************ 00:17:07.582 START TEST nvmf_bdevio_no_huge 00:17:07.582 ************************************ 00:17:07.582 14:23:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:07.842 * Looking for test storage... 
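Editor's note: the run that begins here re-executes bdevio.sh with --no-hugepages. Based on the NVMF_APP+=("${NO_HUGE[@]}") append visible earlier in this trace, that switch works by adding DPDK no-hugepage options to the target command line; the sketch below shows the idea, with the exact array contents treated as an assumption (they live in autotest_common.sh, not in this log).

  # Sketch only: how --no-hugepages plausibly takes effect (array contents are an assumption).
  NO_HUGE=(--no-huge -s 1024)        # --no-huge is the DPDK EAL flag; the 1024 MB -s value is hypothetical
  NVMF_APP+=("${NO_HUGE[@]}")        # this append is shown verbatim earlier in the trace
  # which makes the target launch roughly:
  #   ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 ...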
00:17:07.842 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:07.842 14:23:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:07.842 14:23:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:07.842 14:23:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:07.842 14:23:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:07.842 14:23:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:07.842 14:23:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:07.842 14:23:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:07.842 14:23:13 -- scripts/common.sh@335 -- # IFS=.-: 00:17:07.842 14:23:13 -- scripts/common.sh@335 -- # read -ra ver1 00:17:07.842 14:23:13 -- scripts/common.sh@336 -- # IFS=.-: 00:17:07.842 14:23:13 -- scripts/common.sh@336 -- # read -ra ver2 00:17:07.842 14:23:13 -- scripts/common.sh@337 -- # local 'op=<' 00:17:07.842 14:23:13 -- scripts/common.sh@339 -- # ver1_l=2 00:17:07.842 14:23:13 -- scripts/common.sh@340 -- # ver2_l=1 00:17:07.842 14:23:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:07.842 14:23:13 -- scripts/common.sh@343 -- # case "$op" in 00:17:07.842 14:23:13 -- scripts/common.sh@344 -- # : 1 00:17:07.842 14:23:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:07.842 14:23:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:07.842 14:23:13 -- scripts/common.sh@364 -- # decimal 1 00:17:07.842 14:23:13 -- scripts/common.sh@352 -- # local d=1 00:17:07.842 14:23:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:07.842 14:23:13 -- scripts/common.sh@354 -- # echo 1 00:17:07.842 14:23:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:07.842 14:23:13 -- scripts/common.sh@365 -- # decimal 2 00:17:07.842 14:23:13 -- scripts/common.sh@352 -- # local d=2 00:17:07.842 14:23:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:07.842 14:23:13 -- scripts/common.sh@354 -- # echo 2 00:17:07.842 14:23:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:07.842 14:23:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:07.842 14:23:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:07.842 14:23:13 -- scripts/common.sh@367 -- # return 0 00:17:07.842 14:23:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:07.842 14:23:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:07.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.842 --rc genhtml_branch_coverage=1 00:17:07.842 --rc genhtml_function_coverage=1 00:17:07.842 --rc genhtml_legend=1 00:17:07.842 --rc geninfo_all_blocks=1 00:17:07.842 --rc geninfo_unexecuted_blocks=1 00:17:07.842 00:17:07.842 ' 00:17:07.842 14:23:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:07.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.842 --rc genhtml_branch_coverage=1 00:17:07.842 --rc genhtml_function_coverage=1 00:17:07.842 --rc genhtml_legend=1 00:17:07.842 --rc geninfo_all_blocks=1 00:17:07.842 --rc geninfo_unexecuted_blocks=1 00:17:07.842 00:17:07.842 ' 00:17:07.842 14:23:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:07.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.842 --rc genhtml_branch_coverage=1 00:17:07.842 --rc genhtml_function_coverage=1 00:17:07.842 --rc genhtml_legend=1 00:17:07.842 --rc geninfo_all_blocks=1 00:17:07.842 --rc geninfo_unexecuted_blocks=1 00:17:07.842 00:17:07.842 ' 00:17:07.842 
14:23:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:07.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.842 --rc genhtml_branch_coverage=1 00:17:07.842 --rc genhtml_function_coverage=1 00:17:07.842 --rc genhtml_legend=1 00:17:07.842 --rc geninfo_all_blocks=1 00:17:07.842 --rc geninfo_unexecuted_blocks=1 00:17:07.842 00:17:07.842 ' 00:17:07.842 14:23:13 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:07.842 14:23:13 -- nvmf/common.sh@7 -- # uname -s 00:17:07.842 14:23:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:07.842 14:23:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:07.842 14:23:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:07.842 14:23:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:07.842 14:23:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:07.842 14:23:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:07.842 14:23:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:07.842 14:23:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:07.842 14:23:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:07.842 14:23:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:07.842 14:23:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:17:07.842 14:23:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:17:07.842 14:23:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:07.843 14:23:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:07.843 14:23:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:07.843 14:23:13 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:07.843 14:23:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:07.843 14:23:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:07.843 14:23:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:07.843 14:23:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.843 14:23:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.843 14:23:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.843 14:23:13 -- paths/export.sh@5 -- # export PATH 00:17:07.843 14:23:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.843 14:23:13 -- nvmf/common.sh@46 -- # : 0 00:17:07.843 14:23:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:07.843 14:23:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:07.843 14:23:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:07.843 14:23:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:07.843 14:23:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:07.843 14:23:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:07.843 14:23:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:07.843 14:23:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:07.843 14:23:13 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:07.843 14:23:13 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:07.843 14:23:13 -- target/bdevio.sh@14 -- # nvmftestinit 00:17:07.843 14:23:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:07.843 14:23:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:07.843 14:23:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:07.843 14:23:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:07.843 14:23:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:07.843 14:23:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.843 14:23:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:07.843 14:23:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.843 14:23:13 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:07.843 14:23:13 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:07.843 14:23:13 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:07.843 14:23:13 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:07.843 14:23:13 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:07.843 14:23:13 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:07.843 14:23:13 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:07.843 14:23:13 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:07.843 14:23:13 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:07.843 14:23:13 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:07.843 14:23:13 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:07.843 14:23:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:07.843 14:23:13 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:07.843 14:23:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:07.843 14:23:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:07.843 14:23:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:07.843 14:23:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:07.843 14:23:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:07.843 14:23:13 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:07.843 14:23:13 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:07.843 Cannot find device "nvmf_tgt_br" 00:17:07.843 14:23:13 -- nvmf/common.sh@154 -- # true 00:17:07.843 14:23:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:07.843 Cannot find device "nvmf_tgt_br2" 00:17:07.843 14:23:13 -- nvmf/common.sh@155 -- # true 00:17:07.843 14:23:13 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:07.843 14:23:13 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:07.843 Cannot find device "nvmf_tgt_br" 00:17:07.843 14:23:13 -- nvmf/common.sh@157 -- # true 00:17:07.843 14:23:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:08.102 Cannot find device "nvmf_tgt_br2" 00:17:08.102 14:23:13 -- nvmf/common.sh@158 -- # true 00:17:08.102 14:23:13 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:08.102 14:23:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:08.102 14:23:13 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:08.102 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:08.102 14:23:13 -- nvmf/common.sh@161 -- # true 00:17:08.102 14:23:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:08.102 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:08.102 14:23:13 -- nvmf/common.sh@162 -- # true 00:17:08.102 14:23:13 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:08.102 14:23:13 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:08.102 14:23:13 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:08.102 14:23:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:08.102 14:23:13 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:08.102 14:23:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:08.102 14:23:13 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:08.102 14:23:13 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:08.102 14:23:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:08.102 14:23:13 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:08.102 14:23:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:08.102 14:23:13 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:08.102 14:23:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:08.102 14:23:13 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:08.102 14:23:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:08.103 14:23:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:17:08.103 14:23:13 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:08.103 14:23:13 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:08.103 14:23:13 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:08.103 14:23:13 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:08.103 14:23:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:08.103 14:23:13 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:08.103 14:23:13 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:08.103 14:23:13 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:08.103 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:08.103 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:17:08.103 00:17:08.103 --- 10.0.0.2 ping statistics --- 00:17:08.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.103 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:17:08.103 14:23:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:08.103 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:08.103 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:17:08.103 00:17:08.103 --- 10.0.0.3 ping statistics --- 00:17:08.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.103 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:17:08.103 14:23:13 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:08.103 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:08.103 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:17:08.103 00:17:08.103 --- 10.0.0.1 ping statistics --- 00:17:08.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.103 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:17:08.103 14:23:13 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:08.103 14:23:13 -- nvmf/common.sh@421 -- # return 0 00:17:08.103 14:23:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:08.103 14:23:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:08.103 14:23:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:08.103 14:23:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:08.103 14:23:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:08.103 14:23:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:08.103 14:23:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:08.369 14:23:13 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:08.369 14:23:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:08.369 14:23:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:08.369 14:23:13 -- common/autotest_common.sh@10 -- # set +x 00:17:08.369 14:23:13 -- nvmf/common.sh@469 -- # nvmfpid=88149 00:17:08.369 14:23:13 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:08.369 14:23:13 -- nvmf/common.sh@470 -- # waitforlisten 88149 00:17:08.369 14:23:13 -- common/autotest_common.sh@829 -- # '[' -z 88149 ']' 00:17:08.369 14:23:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.369 14:23:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:08.369 14:23:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:08.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.369 14:23:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:08.369 14:23:13 -- common/autotest_common.sh@10 -- # set +x 00:17:08.369 [2024-12-05 14:23:13.831090] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:08.369 [2024-12-05 14:23:13.831176] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:08.369 [2024-12-05 14:23:13.975335] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:08.629 [2024-12-05 14:23:14.071722] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:08.629 [2024-12-05 14:23:14.071889] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:08.629 [2024-12-05 14:23:14.071902] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:08.629 [2024-12-05 14:23:14.071911] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:08.629 [2024-12-05 14:23:14.072456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:08.629 [2024-12-05 14:23:14.072548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:08.629 [2024-12-05 14:23:14.072709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:08.629 [2024-12-05 14:23:14.072715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:09.197 14:23:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:09.197 14:23:14 -- common/autotest_common.sh@862 -- # return 0 00:17:09.197 14:23:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:09.197 14:23:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:09.197 14:23:14 -- common/autotest_common.sh@10 -- # set +x 00:17:09.455 14:23:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:09.455 14:23:14 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:09.455 14:23:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.455 14:23:14 -- common/autotest_common.sh@10 -- # set +x 00:17:09.455 [2024-12-05 14:23:14.873281] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:09.455 14:23:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.455 14:23:14 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:09.455 14:23:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.455 14:23:14 -- common/autotest_common.sh@10 -- # set +x 00:17:09.455 Malloc0 00:17:09.455 14:23:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.455 14:23:14 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:09.455 14:23:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.455 14:23:14 -- common/autotest_common.sh@10 -- # set +x 00:17:09.455 14:23:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.455 14:23:14 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:09.455 14:23:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.455 14:23:14 -- common/autotest_common.sh@10 -- # set +x 
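The rpc_cmd steps traced above are the whole target-side setup this bdevio run depends on: a TCP transport, a 64 MiB malloc bdev, a subsystem, and the namespace that exposes the bdev (the 10.0.0.2:4420 listener is added in the very next step). A minimal sketch of the same sequence issued directly with SPDK's rpc.py, assuming only an already-running nvmf_tgt and the paths visible in this log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # TCP transport, flags copied from the trace (-u 8192 = in-capsule data size)
    $rpc nvmf_create_transport -t tcp -o -u 8192
    # 64 MiB backing bdev with 512-byte blocks
    $rpc bdev_malloc_create 64 512 -b Malloc0
    # subsystem with any-host access (-a) and the serial used by the test
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # listener added by the following step in the trace
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
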
00:17:09.455 14:23:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.455 14:23:14 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:09.455 14:23:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.455 14:23:14 -- common/autotest_common.sh@10 -- # set +x 00:17:09.455 [2024-12-05 14:23:14.911531] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:09.455 14:23:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.455 14:23:14 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:09.455 14:23:14 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:09.455 14:23:14 -- nvmf/common.sh@520 -- # config=() 00:17:09.455 14:23:14 -- nvmf/common.sh@520 -- # local subsystem config 00:17:09.455 14:23:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:09.455 14:23:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:09.455 { 00:17:09.455 "params": { 00:17:09.455 "name": "Nvme$subsystem", 00:17:09.455 "trtype": "$TEST_TRANSPORT", 00:17:09.455 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:09.455 "adrfam": "ipv4", 00:17:09.455 "trsvcid": "$NVMF_PORT", 00:17:09.455 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:09.455 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:09.455 "hdgst": ${hdgst:-false}, 00:17:09.455 "ddgst": ${ddgst:-false} 00:17:09.455 }, 00:17:09.455 "method": "bdev_nvme_attach_controller" 00:17:09.455 } 00:17:09.455 EOF 00:17:09.455 )") 00:17:09.455 14:23:14 -- nvmf/common.sh@542 -- # cat 00:17:09.456 14:23:14 -- nvmf/common.sh@544 -- # jq . 00:17:09.456 14:23:14 -- nvmf/common.sh@545 -- # IFS=, 00:17:09.456 14:23:14 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:09.456 "params": { 00:17:09.456 "name": "Nvme1", 00:17:09.456 "trtype": "tcp", 00:17:09.456 "traddr": "10.0.0.2", 00:17:09.456 "adrfam": "ipv4", 00:17:09.456 "trsvcid": "4420", 00:17:09.456 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:09.456 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:09.456 "hdgst": false, 00:17:09.456 "ddgst": false 00:17:09.456 }, 00:17:09.456 "method": "bdev_nvme_attach_controller" 00:17:09.456 }' 00:17:09.456 [2024-12-05 14:23:14.971176] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:09.456 [2024-12-05 14:23:14.971262] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid88203 ] 00:17:09.714 [2024-12-05 14:23:15.114085] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:09.714 [2024-12-05 14:23:15.222379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:09.714 [2024-12-05 14:23:15.222514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:09.714 [2024-12-05 14:23:15.222518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.974 [2024-12-05 14:23:15.406074] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:17:09.974 [2024-12-05 14:23:15.406112] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:17:09.974 I/O targets: 00:17:09.974 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:09.974 00:17:09.974 00:17:09.974 CUnit - A unit testing framework for C - Version 2.1-3 00:17:09.974 http://cunit.sourceforge.net/ 00:17:09.974 00:17:09.974 00:17:09.974 Suite: bdevio tests on: Nvme1n1 00:17:09.974 Test: blockdev write read block ...passed 00:17:09.974 Test: blockdev write zeroes read block ...passed 00:17:09.974 Test: blockdev write zeroes read no split ...passed 00:17:09.974 Test: blockdev write zeroes read split ...passed 00:17:09.974 Test: blockdev write zeroes read split partial ...passed 00:17:09.974 Test: blockdev reset ...[2024-12-05 14:23:15.533506] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:09.974 [2024-12-05 14:23:15.533596] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1632820 (9): Bad file descriptor 00:17:09.974 [2024-12-05 14:23:15.544410] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:09.974 passed 00:17:09.974 Test: blockdev write read 8 blocks ...passed 00:17:09.974 Test: blockdev write read size > 128k ...passed 00:17:09.974 Test: blockdev write read invalid size ...passed 00:17:09.974 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:09.974 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:09.974 Test: blockdev write read max offset ...passed 00:17:10.232 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:10.232 Test: blockdev writev readv 8 blocks ...passed 00:17:10.232 Test: blockdev writev readv 30 x 1block ...passed 00:17:10.232 Test: blockdev writev readv block ...passed 00:17:10.232 Test: blockdev writev readv size > 128k ...passed 00:17:10.232 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:10.232 Test: blockdev comparev and writev ...[2024-12-05 14:23:15.719238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:10.232 [2024-12-05 14:23:15.719269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:10.232 [2024-12-05 14:23:15.719287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:10.232 [2024-12-05 14:23:15.719297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:10.232 [2024-12-05 14:23:15.719789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:10.232 [2024-12-05 14:23:15.719833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:10.232 [2024-12-05 14:23:15.719851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:10.232 [2024-12-05 14:23:15.719860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:10.232 [2024-12-05 14:23:15.720571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:10.232 [2024-12-05 14:23:15.720592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:10.232 [2024-12-05 14:23:15.720606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:10.232 [2024-12-05 14:23:15.720616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:10.232 [2024-12-05 14:23:15.721194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:10.232 [2024-12-05 14:23:15.721215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:10.232 [2024-12-05 14:23:15.721229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:10.232 [2024-12-05 14:23:15.721237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:10.232 passed 00:17:10.233 Test: blockdev nvme passthru rw ...passed 00:17:10.233 Test: blockdev nvme passthru vendor specific ...[2024-12-05 14:23:15.804179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:10.233 [2024-12-05 14:23:15.804210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:10.233 [2024-12-05 14:23:15.804507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:10.233 [2024-12-05 14:23:15.804528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:10.233 [2024-12-05 14:23:15.804730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:10.233 [2024-12-05 14:23:15.804750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:10.233 [2024-12-05 14:23:15.804987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:10.233 [2024-12-05 14:23:15.805008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:10.233 passed 00:17:10.233 Test: blockdev nvme admin passthru ...passed 00:17:10.233 Test: blockdev copy ...passed 00:17:10.233 00:17:10.233 Run Summary: Type Total Ran Passed Failed Inactive 00:17:10.233 suites 1 1 n/a 0 0 00:17:10.233 tests 23 23 23 0 0 00:17:10.233 asserts 152 152 152 0 n/a 00:17:10.233 00:17:10.233 Elapsed time = 0.919 seconds 00:17:10.800 14:23:16 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:10.800 14:23:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.800 14:23:16 -- common/autotest_common.sh@10 -- # set +x 00:17:10.800 14:23:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.800 14:23:16 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:10.800 14:23:16 -- target/bdevio.sh@30 -- # nvmftestfini 00:17:10.800 14:23:16 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:17:10.800 14:23:16 -- nvmf/common.sh@116 -- # sync 00:17:10.800 14:23:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:10.800 14:23:16 -- nvmf/common.sh@119 -- # set +e 00:17:10.800 14:23:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:10.800 14:23:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:10.800 rmmod nvme_tcp 00:17:10.800 rmmod nvme_fabrics 00:17:10.800 rmmod nvme_keyring 00:17:10.800 14:23:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:10.800 14:23:16 -- nvmf/common.sh@123 -- # set -e 00:17:10.800 14:23:16 -- nvmf/common.sh@124 -- # return 0 00:17:10.800 14:23:16 -- nvmf/common.sh@477 -- # '[' -n 88149 ']' 00:17:10.800 14:23:16 -- nvmf/common.sh@478 -- # killprocess 88149 00:17:10.800 14:23:16 -- common/autotest_common.sh@936 -- # '[' -z 88149 ']' 00:17:10.800 14:23:16 -- common/autotest_common.sh@940 -- # kill -0 88149 00:17:10.800 14:23:16 -- common/autotest_common.sh@941 -- # uname 00:17:10.800 14:23:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:10.800 14:23:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88149 00:17:10.800 14:23:16 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:17:10.800 14:23:16 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:17:10.800 killing process with pid 88149 00:17:10.800 14:23:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88149' 00:17:10.800 14:23:16 -- common/autotest_common.sh@955 -- # kill 88149 00:17:10.800 14:23:16 -- common/autotest_common.sh@960 -- # wait 88149 00:17:11.368 14:23:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:11.368 14:23:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:11.368 14:23:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:11.368 14:23:16 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:11.368 14:23:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:11.368 14:23:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.368 14:23:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:11.368 14:23:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.368 14:23:16 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:11.368 ************************************ 00:17:11.368 END TEST nvmf_bdevio_no_huge 00:17:11.368 ************************************ 00:17:11.368 00:17:11.368 real 0m3.584s 00:17:11.368 user 0m12.857s 00:17:11.368 sys 0m1.347s 00:17:11.368 14:23:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:11.368 14:23:16 -- common/autotest_common.sh@10 -- # set +x 00:17:11.368 14:23:16 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:11.368 14:23:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:11.368 14:23:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:11.368 14:23:16 -- common/autotest_common.sh@10 -- # set +x 00:17:11.368 ************************************ 00:17:11.368 START TEST nvmf_tls 00:17:11.368 ************************************ 00:17:11.368 14:23:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:11.368 * Looking for test storage... 
00:17:11.368 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:11.368 14:23:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:11.368 14:23:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:11.368 14:23:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:11.627 14:23:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:11.627 14:23:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:11.627 14:23:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:11.627 14:23:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:11.627 14:23:17 -- scripts/common.sh@335 -- # IFS=.-: 00:17:11.627 14:23:17 -- scripts/common.sh@335 -- # read -ra ver1 00:17:11.627 14:23:17 -- scripts/common.sh@336 -- # IFS=.-: 00:17:11.627 14:23:17 -- scripts/common.sh@336 -- # read -ra ver2 00:17:11.627 14:23:17 -- scripts/common.sh@337 -- # local 'op=<' 00:17:11.627 14:23:17 -- scripts/common.sh@339 -- # ver1_l=2 00:17:11.627 14:23:17 -- scripts/common.sh@340 -- # ver2_l=1 00:17:11.627 14:23:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:11.627 14:23:17 -- scripts/common.sh@343 -- # case "$op" in 00:17:11.627 14:23:17 -- scripts/common.sh@344 -- # : 1 00:17:11.627 14:23:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:11.627 14:23:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:11.627 14:23:17 -- scripts/common.sh@364 -- # decimal 1 00:17:11.627 14:23:17 -- scripts/common.sh@352 -- # local d=1 00:17:11.627 14:23:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:11.627 14:23:17 -- scripts/common.sh@354 -- # echo 1 00:17:11.627 14:23:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:11.627 14:23:17 -- scripts/common.sh@365 -- # decimal 2 00:17:11.627 14:23:17 -- scripts/common.sh@352 -- # local d=2 00:17:11.627 14:23:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:11.627 14:23:17 -- scripts/common.sh@354 -- # echo 2 00:17:11.627 14:23:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:11.627 14:23:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:11.627 14:23:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:11.627 14:23:17 -- scripts/common.sh@367 -- # return 0 00:17:11.627 14:23:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:11.627 14:23:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:11.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.627 --rc genhtml_branch_coverage=1 00:17:11.627 --rc genhtml_function_coverage=1 00:17:11.627 --rc genhtml_legend=1 00:17:11.627 --rc geninfo_all_blocks=1 00:17:11.627 --rc geninfo_unexecuted_blocks=1 00:17:11.627 00:17:11.627 ' 00:17:11.627 14:23:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:11.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.628 --rc genhtml_branch_coverage=1 00:17:11.628 --rc genhtml_function_coverage=1 00:17:11.628 --rc genhtml_legend=1 00:17:11.628 --rc geninfo_all_blocks=1 00:17:11.628 --rc geninfo_unexecuted_blocks=1 00:17:11.628 00:17:11.628 ' 00:17:11.628 14:23:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:11.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.628 --rc genhtml_branch_coverage=1 00:17:11.628 --rc genhtml_function_coverage=1 00:17:11.628 --rc genhtml_legend=1 00:17:11.628 --rc geninfo_all_blocks=1 00:17:11.628 --rc geninfo_unexecuted_blocks=1 00:17:11.628 00:17:11.628 ' 00:17:11.628 
14:23:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:11.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.628 --rc genhtml_branch_coverage=1 00:17:11.628 --rc genhtml_function_coverage=1 00:17:11.628 --rc genhtml_legend=1 00:17:11.628 --rc geninfo_all_blocks=1 00:17:11.628 --rc geninfo_unexecuted_blocks=1 00:17:11.628 00:17:11.628 ' 00:17:11.628 14:23:17 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:11.628 14:23:17 -- nvmf/common.sh@7 -- # uname -s 00:17:11.628 14:23:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:11.628 14:23:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:11.628 14:23:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:11.628 14:23:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:11.628 14:23:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:11.628 14:23:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:11.628 14:23:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:11.628 14:23:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:11.628 14:23:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:11.628 14:23:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:11.628 14:23:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:17:11.628 14:23:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:17:11.628 14:23:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:11.628 14:23:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:11.628 14:23:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:11.628 14:23:17 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:11.628 14:23:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:11.628 14:23:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:11.628 14:23:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:11.628 14:23:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.628 14:23:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.628 14:23:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.628 14:23:17 -- paths/export.sh@5 -- # export PATH 00:17:11.628 14:23:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.628 14:23:17 -- nvmf/common.sh@46 -- # : 0 00:17:11.628 14:23:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:11.628 14:23:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:11.628 14:23:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:11.628 14:23:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:11.628 14:23:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:11.628 14:23:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:11.628 14:23:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:11.628 14:23:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:11.628 14:23:17 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:11.628 14:23:17 -- target/tls.sh@71 -- # nvmftestinit 00:17:11.628 14:23:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:11.628 14:23:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:11.628 14:23:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:11.628 14:23:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:11.628 14:23:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:11.628 14:23:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.628 14:23:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:11.628 14:23:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.628 14:23:17 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:11.628 14:23:17 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:11.628 14:23:17 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:11.628 14:23:17 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:11.628 14:23:17 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:11.628 14:23:17 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:11.628 14:23:17 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:11.628 14:23:17 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:11.628 14:23:17 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:11.628 14:23:17 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:11.628 14:23:17 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:11.628 14:23:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:11.628 14:23:17 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:11.628 
14:23:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:11.628 14:23:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:11.628 14:23:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:11.628 14:23:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:11.628 14:23:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:11.628 14:23:17 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:11.628 14:23:17 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:11.628 Cannot find device "nvmf_tgt_br" 00:17:11.628 14:23:17 -- nvmf/common.sh@154 -- # true 00:17:11.628 14:23:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:11.628 Cannot find device "nvmf_tgt_br2" 00:17:11.628 14:23:17 -- nvmf/common.sh@155 -- # true 00:17:11.628 14:23:17 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:11.628 14:23:17 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:11.628 Cannot find device "nvmf_tgt_br" 00:17:11.628 14:23:17 -- nvmf/common.sh@157 -- # true 00:17:11.628 14:23:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:11.628 Cannot find device "nvmf_tgt_br2" 00:17:11.628 14:23:17 -- nvmf/common.sh@158 -- # true 00:17:11.628 14:23:17 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:11.628 14:23:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:11.628 14:23:17 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:11.628 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:11.628 14:23:17 -- nvmf/common.sh@161 -- # true 00:17:11.628 14:23:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:11.628 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:11.628 14:23:17 -- nvmf/common.sh@162 -- # true 00:17:11.628 14:23:17 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:11.628 14:23:17 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:11.628 14:23:17 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:11.628 14:23:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:11.628 14:23:17 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:11.628 14:23:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:11.628 14:23:17 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:11.628 14:23:17 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:11.628 14:23:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:11.628 14:23:17 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:11.887 14:23:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:11.887 14:23:17 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:11.887 14:23:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:11.887 14:23:17 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:11.887 14:23:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:11.887 14:23:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:11.887 14:23:17 -- 
nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:11.887 14:23:17 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:11.887 14:23:17 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:11.887 14:23:17 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:11.887 14:23:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:11.887 14:23:17 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:11.887 14:23:17 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:11.888 14:23:17 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:11.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:11.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:17:11.888 00:17:11.888 --- 10.0.0.2 ping statistics --- 00:17:11.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.888 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:17:11.888 14:23:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:11.888 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:11.888 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:17:11.888 00:17:11.888 --- 10.0.0.3 ping statistics --- 00:17:11.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.888 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:17:11.888 14:23:17 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:11.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:11.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:17:11.888 00:17:11.888 --- 10.0.0.1 ping statistics --- 00:17:11.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.888 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:17:11.888 14:23:17 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:11.888 14:23:17 -- nvmf/common.sh@421 -- # return 0 00:17:11.888 14:23:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:11.888 14:23:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:11.888 14:23:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:11.888 14:23:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:11.888 14:23:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:11.888 14:23:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:11.888 14:23:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:11.888 14:23:17 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:11.888 14:23:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:11.888 14:23:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:11.888 14:23:17 -- common/autotest_common.sh@10 -- # set +x 00:17:11.888 14:23:17 -- nvmf/common.sh@469 -- # nvmfpid=88391 00:17:11.888 14:23:17 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:11.888 14:23:17 -- nvmf/common.sh@470 -- # waitforlisten 88391 00:17:11.888 14:23:17 -- common/autotest_common.sh@829 -- # '[' -z 88391 ']' 00:17:11.888 14:23:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.888 14:23:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:11.888 14:23:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:11.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:11.888 14:23:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:11.888 14:23:17 -- common/autotest_common.sh@10 -- # set +x 00:17:11.888 [2024-12-05 14:23:17.473174] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:11.888 [2024-12-05 14:23:17.473258] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:12.147 [2024-12-05 14:23:17.617424] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.147 [2024-12-05 14:23:17.707723] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:12.147 [2024-12-05 14:23:17.707918] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:12.147 [2024-12-05 14:23:17.707937] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:12.147 [2024-12-05 14:23:17.707950] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:12.147 [2024-12-05 14:23:17.707993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:13.084 14:23:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:13.084 14:23:18 -- common/autotest_common.sh@862 -- # return 0 00:17:13.084 14:23:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:13.084 14:23:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:13.084 14:23:18 -- common/autotest_common.sh@10 -- # set +x 00:17:13.084 14:23:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:13.084 14:23:18 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:17:13.084 14:23:18 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:13.343 true 00:17:13.343 14:23:18 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:13.343 14:23:18 -- target/tls.sh@82 -- # jq -r .tls_version 00:17:13.602 14:23:18 -- target/tls.sh@82 -- # version=0 00:17:13.602 14:23:18 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:17:13.602 14:23:18 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:13.602 14:23:19 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:13.602 14:23:19 -- target/tls.sh@90 -- # jq -r .tls_version 00:17:13.860 14:23:19 -- target/tls.sh@90 -- # version=13 00:17:13.860 14:23:19 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:17:13.860 14:23:19 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:14.119 14:23:19 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:14.119 14:23:19 -- target/tls.sh@98 -- # jq -r .tls_version 00:17:14.379 14:23:19 -- target/tls.sh@98 -- # version=7 00:17:14.379 14:23:19 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:17:14.379 14:23:19 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:14.379 14:23:19 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:17:14.638 14:23:20 -- target/tls.sh@105 -- # ktls=false 00:17:14.638 14:23:20 -- 
target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:17:14.638 14:23:20 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:14.903 14:23:20 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:14.903 14:23:20 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:17:14.903 14:23:20 -- target/tls.sh@113 -- # ktls=true 00:17:14.903 14:23:20 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:17:14.903 14:23:20 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:15.203 14:23:20 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:15.203 14:23:20 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:17:15.483 14:23:21 -- target/tls.sh@121 -- # ktls=false 00:17:15.483 14:23:21 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:17:15.483 14:23:21 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:17:15.483 14:23:21 -- target/tls.sh@49 -- # local key hash crc 00:17:15.483 14:23:21 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:17:15.483 14:23:21 -- target/tls.sh@51 -- # hash=01 00:17:15.483 14:23:21 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:17:15.483 14:23:21 -- target/tls.sh@52 -- # gzip -1 -c 00:17:15.483 14:23:21 -- target/tls.sh@52 -- # tail -c8 00:17:15.483 14:23:21 -- target/tls.sh@52 -- # head -c 4 00:17:15.483 14:23:21 -- target/tls.sh@52 -- # crc='p$H�' 00:17:15.483 14:23:21 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:17:15.483 14:23:21 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:17:15.483 14:23:21 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:15.483 14:23:21 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:15.483 14:23:21 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:17:15.483 14:23:21 -- target/tls.sh@49 -- # local key hash crc 00:17:15.483 14:23:21 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:17:15.483 14:23:21 -- target/tls.sh@51 -- # hash=01 00:17:15.483 14:23:21 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:17:15.483 14:23:21 -- target/tls.sh@52 -- # gzip -1 -c 00:17:15.483 14:23:21 -- target/tls.sh@52 -- # tail -c8 00:17:15.483 14:23:21 -- target/tls.sh@52 -- # head -c 4 00:17:15.483 14:23:21 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:17:15.483 14:23:21 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:17:15.483 14:23:21 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:17:15.483 14:23:21 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:15.483 14:23:21 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:15.483 14:23:21 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:15.483 14:23:21 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:15.483 14:23:21 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:15.483 14:23:21 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:15.483 14:23:21 -- target/tls.sh@136 -- # chmod 0600 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:15.483 14:23:21 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:15.483 14:23:21 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:15.742 14:23:21 -- target/tls.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:16.308 14:23:21 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:16.308 14:23:21 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:16.308 14:23:21 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:16.308 [2024-12-05 14:23:21.918951] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:16.308 14:23:21 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:16.567 14:23:22 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:16.825 [2024-12-05 14:23:22.326981] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:16.825 [2024-12-05 14:23:22.327237] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:16.825 14:23:22 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:17.084 malloc0 00:17:17.084 14:23:22 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:17.343 14:23:22 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:17.601 14:23:23 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:27.577 Initializing NVMe Controllers 00:17:27.577 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:27.577 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:27.577 Initialization complete. Launching workers. 
00:17:27.577 ======================================================== 00:17:27.577 Latency(us) 00:17:27.577 Device Information : IOPS MiB/s Average min max 00:17:27.577 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11827.14 46.20 5412.15 1610.97 16478.75 00:17:27.577 ======================================================== 00:17:27.577 Total : 11827.14 46.20 5412.15 1610.97 16478.75 00:17:27.577 00:17:27.577 14:23:33 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:27.577 14:23:33 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:27.577 14:23:33 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:27.577 14:23:33 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:27.577 14:23:33 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:27.577 14:23:33 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:27.577 14:23:33 -- target/tls.sh@28 -- # bdevperf_pid=88762 00:17:27.577 14:23:33 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:27.577 14:23:33 -- target/tls.sh@31 -- # waitforlisten 88762 /var/tmp/bdevperf.sock 00:17:27.577 14:23:33 -- common/autotest_common.sh@829 -- # '[' -z 88762 ']' 00:17:27.577 14:23:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:27.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:27.577 14:23:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:27.577 14:23:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:27.577 14:23:33 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:27.577 14:23:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:27.577 14:23:33 -- common/autotest_common.sh@10 -- # set +x 00:17:27.837 [2024-12-05 14:23:33.275797] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:27.837 [2024-12-05 14:23:33.275920] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88762 ] 00:17:27.837 [2024-12-05 14:23:33.417563] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.097 [2024-12-05 14:23:33.496996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:28.666 14:23:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:28.666 14:23:34 -- common/autotest_common.sh@862 -- # return 0 00:17:28.666 14:23:34 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:28.925 [2024-12-05 14:23:34.454798] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:28.925 TLSTESTn1 00:17:28.925 14:23:34 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:29.184 Running I/O for 10 seconds... 
00:17:39.159 00:17:39.159 Latency(us) 00:17:39.159 [2024-12-05T14:23:44.807Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.159 [2024-12-05T14:23:44.807Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:39.159 Verification LBA range: start 0x0 length 0x2000 00:17:39.159 TLSTESTn1 : 10.01 6961.35 27.19 0.00 0.00 18360.02 3872.58 26691.03 00:17:39.159 [2024-12-05T14:23:44.807Z] =================================================================================================================== 00:17:39.159 [2024-12-05T14:23:44.807Z] Total : 6961.35 27.19 0.00 0.00 18360.02 3872.58 26691.03 00:17:39.159 0 00:17:39.159 14:23:44 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:39.159 14:23:44 -- target/tls.sh@45 -- # killprocess 88762 00:17:39.159 14:23:44 -- common/autotest_common.sh@936 -- # '[' -z 88762 ']' 00:17:39.159 14:23:44 -- common/autotest_common.sh@940 -- # kill -0 88762 00:17:39.159 14:23:44 -- common/autotest_common.sh@941 -- # uname 00:17:39.159 14:23:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:39.159 14:23:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88762 00:17:39.159 killing process with pid 88762 00:17:39.159 Received shutdown signal, test time was about 10.000000 seconds 00:17:39.159 00:17:39.159 Latency(us) 00:17:39.159 [2024-12-05T14:23:44.807Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.159 [2024-12-05T14:23:44.807Z] =================================================================================================================== 00:17:39.159 [2024-12-05T14:23:44.807Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:39.159 14:23:44 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:39.159 14:23:44 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:39.159 14:23:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88762' 00:17:39.159 14:23:44 -- common/autotest_common.sh@955 -- # kill 88762 00:17:39.159 14:23:44 -- common/autotest_common.sh@960 -- # wait 88762 00:17:39.418 14:23:44 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:39.418 14:23:44 -- common/autotest_common.sh@650 -- # local es=0 00:17:39.418 14:23:44 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:39.418 14:23:44 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:39.418 14:23:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:39.418 14:23:44 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:39.418 14:23:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:39.418 14:23:44 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:39.418 14:23:44 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:39.418 14:23:44 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:39.418 14:23:44 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:39.418 14:23:44 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:17:39.418 14:23:44 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:39.418 
14:23:44 -- target/tls.sh@28 -- # bdevperf_pid=88909 00:17:39.418 14:23:44 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:39.418 14:23:44 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:39.418 14:23:44 -- target/tls.sh@31 -- # waitforlisten 88909 /var/tmp/bdevperf.sock 00:17:39.418 14:23:44 -- common/autotest_common.sh@829 -- # '[' -z 88909 ']' 00:17:39.418 14:23:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:39.418 14:23:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:39.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:39.418 14:23:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:39.418 14:23:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:39.418 14:23:44 -- common/autotest_common.sh@10 -- # set +x 00:17:39.418 [2024-12-05 14:23:45.034241] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:39.418 [2024-12-05 14:23:45.034349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88909 ] 00:17:39.676 [2024-12-05 14:23:45.168790] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.676 [2024-12-05 14:23:45.238020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.609 14:23:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:40.609 14:23:45 -- common/autotest_common.sh@862 -- # return 0 00:17:40.609 14:23:45 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:40.609 [2024-12-05 14:23:46.141321] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:40.609 [2024-12-05 14:23:46.146099] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:40.609 [2024-12-05 14:23:46.146667] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1275cc0 (107): Transport endpoint is not connected 00:17:40.609 [2024-12-05 14:23:46.147656] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1275cc0 (9): Bad file descriptor 00:17:40.609 [2024-12-05 14:23:46.148653] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:40.609 [2024-12-05 14:23:46.148671] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:40.609 [2024-12-05 14:23:46.148680] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:40.609 2024/12/05 14:23:46 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:40.609 request: 00:17:40.609 { 00:17:40.609 "method": "bdev_nvme_attach_controller", 00:17:40.609 "params": { 00:17:40.609 "name": "TLSTEST", 00:17:40.609 "trtype": "tcp", 00:17:40.609 "traddr": "10.0.0.2", 00:17:40.609 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:40.609 "adrfam": "ipv4", 00:17:40.609 "trsvcid": "4420", 00:17:40.609 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:40.609 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt" 00:17:40.609 } 00:17:40.609 } 00:17:40.609 Got JSON-RPC error response 00:17:40.609 GoRPCClient: error on JSON-RPC call 00:17:40.609 14:23:46 -- target/tls.sh@36 -- # killprocess 88909 00:17:40.609 14:23:46 -- common/autotest_common.sh@936 -- # '[' -z 88909 ']' 00:17:40.609 14:23:46 -- common/autotest_common.sh@940 -- # kill -0 88909 00:17:40.609 14:23:46 -- common/autotest_common.sh@941 -- # uname 00:17:40.609 14:23:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:40.609 14:23:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88909 00:17:40.609 killing process with pid 88909 00:17:40.609 Received shutdown signal, test time was about 10.000000 seconds 00:17:40.609 00:17:40.609 Latency(us) 00:17:40.609 [2024-12-05T14:23:46.257Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.609 [2024-12-05T14:23:46.257Z] =================================================================================================================== 00:17:40.609 [2024-12-05T14:23:46.257Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:40.609 14:23:46 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:40.609 14:23:46 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:40.609 14:23:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88909' 00:17:40.609 14:23:46 -- common/autotest_common.sh@955 -- # kill 88909 00:17:40.609 14:23:46 -- common/autotest_common.sh@960 -- # wait 88909 00:17:40.868 14:23:46 -- target/tls.sh@37 -- # return 1 00:17:40.868 14:23:46 -- common/autotest_common.sh@653 -- # es=1 00:17:40.868 14:23:46 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:40.868 14:23:46 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:40.868 14:23:46 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:40.868 14:23:46 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:40.868 14:23:46 -- common/autotest_common.sh@650 -- # local es=0 00:17:40.868 14:23:46 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:40.868 14:23:46 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:40.868 14:23:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:40.868 14:23:46 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:40.868 14:23:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:40.868 14:23:46 -- common/autotest_common.sh@653 -- # 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:40.868 14:23:46 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:40.868 14:23:46 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:40.868 14:23:46 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:40.868 14:23:46 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:40.868 14:23:46 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:40.868 14:23:46 -- target/tls.sh@28 -- # bdevperf_pid=88959 00:17:40.868 14:23:46 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:40.868 14:23:46 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:40.868 14:23:46 -- target/tls.sh@31 -- # waitforlisten 88959 /var/tmp/bdevperf.sock 00:17:40.868 14:23:46 -- common/autotest_common.sh@829 -- # '[' -z 88959 ']' 00:17:40.868 14:23:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:40.868 14:23:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:40.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:40.868 14:23:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:40.868 14:23:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:40.868 14:23:46 -- common/autotest_common.sh@10 -- # set +x 00:17:40.868 [2024-12-05 14:23:46.511158] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:40.868 [2024-12-05 14:23:46.511250] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88959 ] 00:17:41.127 [2024-12-05 14:23:46.643675] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.127 [2024-12-05 14:23:46.706158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:42.065 14:23:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:42.065 14:23:47 -- common/autotest_common.sh@862 -- # return 0 00:17:42.065 14:23:47 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:42.065 [2024-12-05 14:23:47.682995] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:42.065 [2024-12-05 14:23:47.687657] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:42.065 [2024-12-05 14:23:47.687690] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:42.065 [2024-12-05 14:23:47.687735] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:42.065 [2024-12-05 14:23:47.688402] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0xd24cc0 (107): Transport endpoint is not connected 00:17:42.065 [2024-12-05 14:23:47.689390] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd24cc0 (9): Bad file descriptor 00:17:42.065 [2024-12-05 14:23:47.690386] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:42.065 [2024-12-05 14:23:47.690403] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:42.065 [2024-12-05 14:23:47.690412] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:42.065 2024/12/05 14:23:47 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:42.065 request: 00:17:42.065 { 00:17:42.065 "method": "bdev_nvme_attach_controller", 00:17:42.065 "params": { 00:17:42.065 "name": "TLSTEST", 00:17:42.065 "trtype": "tcp", 00:17:42.065 "traddr": "10.0.0.2", 00:17:42.065 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:42.065 "adrfam": "ipv4", 00:17:42.065 "trsvcid": "4420", 00:17:42.065 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:42.065 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:17:42.065 } 00:17:42.065 } 00:17:42.065 Got JSON-RPC error response 00:17:42.065 GoRPCClient: error on JSON-RPC call 00:17:42.065 14:23:47 -- target/tls.sh@36 -- # killprocess 88959 00:17:42.065 14:23:47 -- common/autotest_common.sh@936 -- # '[' -z 88959 ']' 00:17:42.065 14:23:47 -- common/autotest_common.sh@940 -- # kill -0 88959 00:17:42.065 14:23:47 -- common/autotest_common.sh@941 -- # uname 00:17:42.325 14:23:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:42.325 14:23:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88959 00:17:42.325 14:23:47 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:42.325 14:23:47 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:42.325 14:23:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88959' 00:17:42.325 killing process with pid 88959 00:17:42.325 Received shutdown signal, test time was about 10.000000 seconds 00:17:42.325 00:17:42.325 Latency(us) 00:17:42.325 [2024-12-05T14:23:47.973Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.325 [2024-12-05T14:23:47.973Z] =================================================================================================================== 00:17:42.325 [2024-12-05T14:23:47.973Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:42.325 14:23:47 -- common/autotest_common.sh@955 -- # kill 88959 00:17:42.325 14:23:47 -- common/autotest_common.sh@960 -- # wait 88959 00:17:42.584 14:23:47 -- target/tls.sh@37 -- # return 1 00:17:42.584 14:23:47 -- common/autotest_common.sh@653 -- # es=1 00:17:42.584 14:23:47 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:42.584 14:23:47 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:42.584 14:23:47 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:42.584 14:23:47 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:42.584 14:23:47 -- 
common/autotest_common.sh@650 -- # local es=0 00:17:42.584 14:23:47 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:42.584 14:23:47 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:42.584 14:23:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:42.584 14:23:47 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:42.584 14:23:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:42.584 14:23:47 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:42.584 14:23:47 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:42.584 14:23:47 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:42.584 14:23:47 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:42.584 14:23:47 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:42.584 14:23:47 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:42.584 14:23:47 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:42.584 14:23:47 -- target/tls.sh@28 -- # bdevperf_pid=89000 00:17:42.584 14:23:47 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:42.584 14:23:47 -- target/tls.sh@31 -- # waitforlisten 89000 /var/tmp/bdevperf.sock 00:17:42.584 14:23:47 -- common/autotest_common.sh@829 -- # '[' -z 89000 ']' 00:17:42.584 14:23:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:42.584 14:23:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:42.584 14:23:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:42.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:42.584 14:23:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:42.584 14:23:47 -- common/autotest_common.sh@10 -- # set +x 00:17:42.584 [2024-12-05 14:23:48.040479] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:42.584 [2024-12-05 14:23:48.040707] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89000 ] 00:17:42.584 [2024-12-05 14:23:48.167022] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.843 [2024-12-05 14:23:48.239629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:43.411 14:23:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:43.411 14:23:48 -- common/autotest_common.sh@862 -- # return 0 00:17:43.411 14:23:48 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:43.671 [2024-12-05 14:23:49.236415] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:43.671 [2024-12-05 14:23:49.245709] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:43.671 [2024-12-05 14:23:49.245746] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:43.671 [2024-12-05 14:23:49.245791] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:43.671 [2024-12-05 14:23:49.246054] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c1acc0 (107): Transport endpoint is not connected 00:17:43.671 [2024-12-05 14:23:49.247045] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c1acc0 (9): Bad file descriptor 00:17:43.671 [2024-12-05 14:23:49.248042] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:43.671 [2024-12-05 14:23:49.248066] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:43.671 [2024-12-05 14:23:49.248083] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:17:43.671 2024/12/05 14:23:49 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:43.671 request: 00:17:43.671 { 00:17:43.671 "method": "bdev_nvme_attach_controller", 00:17:43.671 "params": { 00:17:43.671 "name": "TLSTEST", 00:17:43.671 "trtype": "tcp", 00:17:43.671 "traddr": "10.0.0.2", 00:17:43.671 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:43.671 "adrfam": "ipv4", 00:17:43.671 "trsvcid": "4420", 00:17:43.671 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:43.671 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:17:43.671 } 00:17:43.671 } 00:17:43.671 Got JSON-RPC error response 00:17:43.671 GoRPCClient: error on JSON-RPC call 00:17:43.671 14:23:49 -- target/tls.sh@36 -- # killprocess 89000 00:17:43.671 14:23:49 -- common/autotest_common.sh@936 -- # '[' -z 89000 ']' 00:17:43.671 14:23:49 -- common/autotest_common.sh@940 -- # kill -0 89000 00:17:43.671 14:23:49 -- common/autotest_common.sh@941 -- # uname 00:17:43.671 14:23:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:43.671 14:23:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89000 00:17:43.671 14:23:49 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:43.671 14:23:49 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:43.671 killing process with pid 89000 00:17:43.671 14:23:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89000' 00:17:43.671 Received shutdown signal, test time was about 10.000000 seconds 00:17:43.671 00:17:43.671 Latency(us) 00:17:43.671 [2024-12-05T14:23:49.319Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:43.671 [2024-12-05T14:23:49.319Z] =================================================================================================================== 00:17:43.671 [2024-12-05T14:23:49.319Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:43.671 14:23:49 -- common/autotest_common.sh@955 -- # kill 89000 00:17:43.671 14:23:49 -- common/autotest_common.sh@960 -- # wait 89000 00:17:43.931 14:23:49 -- target/tls.sh@37 -- # return 1 00:17:43.931 14:23:49 -- common/autotest_common.sh@653 -- # es=1 00:17:43.931 14:23:49 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:43.931 14:23:49 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:43.931 14:23:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:43.931 14:23:49 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:43.931 14:23:49 -- common/autotest_common.sh@650 -- # local es=0 00:17:43.931 14:23:49 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:43.931 14:23:49 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:43.931 14:23:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:43.931 14:23:49 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:43.931 14:23:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:43.931 14:23:49 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:43.931 14:23:49 -- 
target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:43.931 14:23:49 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:43.931 14:23:49 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:43.931 14:23:49 -- target/tls.sh@23 -- # psk= 00:17:43.931 14:23:49 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:43.931 14:23:49 -- target/tls.sh@28 -- # bdevperf_pid=89046 00:17:43.931 14:23:49 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:43.931 14:23:49 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:43.931 14:23:49 -- target/tls.sh@31 -- # waitforlisten 89046 /var/tmp/bdevperf.sock 00:17:43.931 14:23:49 -- common/autotest_common.sh@829 -- # '[' -z 89046 ']' 00:17:43.932 14:23:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:43.932 14:23:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:43.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:43.932 14:23:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:43.932 14:23:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:43.932 14:23:49 -- common/autotest_common.sh@10 -- # set +x 00:17:44.191 [2024-12-05 14:23:49.616295] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:44.191 [2024-12-05 14:23:49.616396] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89046 ] 00:17:44.191 [2024-12-05 14:23:49.753426] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.191 [2024-12-05 14:23:49.819323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:45.127 14:23:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:45.127 14:23:50 -- common/autotest_common.sh@862 -- # return 0 00:17:45.127 14:23:50 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:45.127 [2024-12-05 14:23:50.755998] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:45.127 [2024-12-05 14:23:50.757933] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16698c0 (9): Bad file descriptor 00:17:45.127 [2024-12-05 14:23:50.758928] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:45.127 [2024-12-05 14:23:50.758958] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:45.127 [2024-12-05 14:23:50.758969] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:45.127 2024/12/05 14:23:50 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:45.127 request: 00:17:45.127 { 00:17:45.127 "method": "bdev_nvme_attach_controller", 00:17:45.127 "params": { 00:17:45.127 "name": "TLSTEST", 00:17:45.127 "trtype": "tcp", 00:17:45.127 "traddr": "10.0.0.2", 00:17:45.127 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:45.127 "adrfam": "ipv4", 00:17:45.127 "trsvcid": "4420", 00:17:45.127 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:17:45.127 } 00:17:45.127 } 00:17:45.127 Got JSON-RPC error response 00:17:45.127 GoRPCClient: error on JSON-RPC call 00:17:45.386 14:23:50 -- target/tls.sh@36 -- # killprocess 89046 00:17:45.386 14:23:50 -- common/autotest_common.sh@936 -- # '[' -z 89046 ']' 00:17:45.386 14:23:50 -- common/autotest_common.sh@940 -- # kill -0 89046 00:17:45.386 14:23:50 -- common/autotest_common.sh@941 -- # uname 00:17:45.386 14:23:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:45.386 14:23:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89046 00:17:45.386 14:23:50 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:45.386 14:23:50 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:45.386 killing process with pid 89046 00:17:45.386 14:23:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89046' 00:17:45.386 Received shutdown signal, test time was about 10.000000 seconds 00:17:45.386 00:17:45.386 Latency(us) 00:17:45.386 [2024-12-05T14:23:51.034Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.386 [2024-12-05T14:23:51.034Z] =================================================================================================================== 00:17:45.386 [2024-12-05T14:23:51.034Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:45.386 14:23:50 -- common/autotest_common.sh@955 -- # kill 89046 00:17:45.386 14:23:50 -- common/autotest_common.sh@960 -- # wait 89046 00:17:45.645 14:23:51 -- target/tls.sh@37 -- # return 1 00:17:45.645 14:23:51 -- common/autotest_common.sh@653 -- # es=1 00:17:45.645 14:23:51 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:45.645 14:23:51 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:45.645 14:23:51 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:45.645 14:23:51 -- target/tls.sh@167 -- # killprocess 88391 00:17:45.645 14:23:51 -- common/autotest_common.sh@936 -- # '[' -z 88391 ']' 00:17:45.645 14:23:51 -- common/autotest_common.sh@940 -- # kill -0 88391 00:17:45.645 14:23:51 -- common/autotest_common.sh@941 -- # uname 00:17:45.645 14:23:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:45.645 14:23:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88391 00:17:45.645 14:23:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:45.645 14:23:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:45.645 killing process with pid 88391 00:17:45.645 14:23:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88391' 00:17:45.645 14:23:51 -- common/autotest_common.sh@955 -- # kill 88391 00:17:45.645 14:23:51 -- common/autotest_common.sh@960 -- # wait 88391 00:17:45.904 14:23:51 -- target/tls.sh@168 -- # format_interchange_psk 
00112233445566778899aabbccddeeff0011223344556677 02 00:17:45.904 14:23:51 -- target/tls.sh@49 -- # local key hash crc 00:17:45.904 14:23:51 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:45.904 14:23:51 -- target/tls.sh@51 -- # hash=02 00:17:45.904 14:23:51 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:17:45.904 14:23:51 -- target/tls.sh@52 -- # gzip -1 -c 00:17:45.904 14:23:51 -- target/tls.sh@52 -- # tail -c8 00:17:45.904 14:23:51 -- target/tls.sh@52 -- # head -c 4 00:17:45.904 14:23:51 -- target/tls.sh@52 -- # crc='�e�'\''' 00:17:45.904 14:23:51 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:17:45.904 14:23:51 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:17:45.904 14:23:51 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:45.904 14:23:51 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:45.904 14:23:51 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:45.904 14:23:51 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:45.904 14:23:51 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:45.904 14:23:51 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:17:45.904 14:23:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:45.904 14:23:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:45.904 14:23:51 -- common/autotest_common.sh@10 -- # set +x 00:17:45.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.904 14:23:51 -- nvmf/common.sh@469 -- # nvmfpid=89112 00:17:45.904 14:23:51 -- nvmf/common.sh@470 -- # waitforlisten 89112 00:17:45.904 14:23:51 -- common/autotest_common.sh@829 -- # '[' -z 89112 ']' 00:17:45.904 14:23:51 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:45.904 14:23:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.904 14:23:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:45.904 14:23:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.905 14:23:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:45.905 14:23:51 -- common/autotest_common.sh@10 -- # set +x 00:17:45.905 [2024-12-05 14:23:51.475335] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:45.905 [2024-12-05 14:23:51.475431] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:46.164 [2024-12-05 14:23:51.613082] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.164 [2024-12-05 14:23:51.683015] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:46.164 [2024-12-05 14:23:51.683155] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:46.164 [2024-12-05 14:23:51.683168] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:46.164 [2024-12-05 14:23:51.683176] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:46.164 [2024-12-05 14:23:51.683203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.100 14:23:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:47.100 14:23:52 -- common/autotest_common.sh@862 -- # return 0 00:17:47.100 14:23:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:47.100 14:23:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:47.100 14:23:52 -- common/autotest_common.sh@10 -- # set +x 00:17:47.100 14:23:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:47.100 14:23:52 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:47.100 14:23:52 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:47.100 14:23:52 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:47.100 [2024-12-05 14:23:52.621391] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:47.100 14:23:52 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:47.359 14:23:52 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:47.617 [2024-12-05 14:23:53.025479] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:47.617 [2024-12-05 14:23:53.025697] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:47.617 14:23:53 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:47.617 malloc0 00:17:47.617 14:23:53 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:47.876 14:23:53 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:48.135 14:23:53 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:48.135 14:23:53 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:48.135 14:23:53 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:48.135 14:23:53 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:48.135 14:23:53 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:48.135 14:23:53 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:48.135 14:23:53 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:48.135 14:23:53 -- target/tls.sh@28 -- # bdevperf_pid=89209 00:17:48.135 14:23:53 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:48.135 14:23:53 -- target/tls.sh@31 -- # waitforlisten 89209 /var/tmp/bdevperf.sock 00:17:48.135 14:23:53 -- 
common/autotest_common.sh@829 -- # '[' -z 89209 ']' 00:17:48.135 14:23:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:48.135 14:23:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:48.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:48.135 14:23:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:48.135 14:23:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:48.135 14:23:53 -- common/autotest_common.sh@10 -- # set +x 00:17:48.135 [2024-12-05 14:23:53.734857] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:48.135 [2024-12-05 14:23:53.734958] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89209 ] 00:17:48.396 [2024-12-05 14:23:53.863190] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.396 [2024-12-05 14:23:53.933822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:49.331 14:23:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:49.331 14:23:54 -- common/autotest_common.sh@862 -- # return 0 00:17:49.331 14:23:54 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:49.331 [2024-12-05 14:23:54.926437] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:49.588 TLSTESTn1 00:17:49.588 14:23:55 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:49.588 Running I/O for 10 seconds... 
00:17:59.564 00:17:59.564 Latency(us) 00:17:59.564 [2024-12-05T14:24:05.212Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.564 [2024-12-05T14:24:05.212Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:59.564 Verification LBA range: start 0x0 length 0x2000 00:17:59.564 TLSTESTn1 : 10.01 6831.19 26.68 0.00 0.00 18708.91 3440.64 18826.71 00:17:59.564 [2024-12-05T14:24:05.212Z] =================================================================================================================== 00:17:59.564 [2024-12-05T14:24:05.212Z] Total : 6831.19 26.68 0.00 0.00 18708.91 3440.64 18826.71 00:17:59.564 0 00:17:59.564 14:24:05 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:59.564 14:24:05 -- target/tls.sh@45 -- # killprocess 89209 00:17:59.564 14:24:05 -- common/autotest_common.sh@936 -- # '[' -z 89209 ']' 00:17:59.564 14:24:05 -- common/autotest_common.sh@940 -- # kill -0 89209 00:17:59.564 14:24:05 -- common/autotest_common.sh@941 -- # uname 00:17:59.564 14:24:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:59.564 14:24:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89209 00:17:59.564 14:24:05 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:59.564 14:24:05 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:59.564 killing process with pid 89209 00:17:59.564 14:24:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89209' 00:17:59.564 Received shutdown signal, test time was about 10.000000 seconds 00:17:59.564 00:17:59.564 Latency(us) 00:17:59.564 [2024-12-05T14:24:05.212Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.564 [2024-12-05T14:24:05.212Z] =================================================================================================================== 00:17:59.564 [2024-12-05T14:24:05.212Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:59.564 14:24:05 -- common/autotest_common.sh@955 -- # kill 89209 00:17:59.564 14:24:05 -- common/autotest_common.sh@960 -- # wait 89209 00:17:59.838 14:24:05 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:59.839 14:24:05 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:59.839 14:24:05 -- common/autotest_common.sh@650 -- # local es=0 00:17:59.839 14:24:05 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:59.839 14:24:05 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:59.839 14:24:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:59.839 14:24:05 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:59.839 14:24:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:59.839 14:24:05 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:59.839 14:24:05 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:59.839 14:24:05 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:59.839 14:24:05 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:59.839 14:24:05 -- target/tls.sh@23 -- # psk='--psk 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:59.839 14:24:05 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:59.839 14:24:05 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:59.839 14:24:05 -- target/tls.sh@28 -- # bdevperf_pid=89356 00:17:59.839 14:24:05 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:59.839 14:24:05 -- target/tls.sh@31 -- # waitforlisten 89356 /var/tmp/bdevperf.sock 00:17:59.839 14:24:05 -- common/autotest_common.sh@829 -- # '[' -z 89356 ']' 00:17:59.839 14:24:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:59.839 14:24:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:59.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:59.839 14:24:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:59.839 14:24:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:59.839 14:24:05 -- common/autotest_common.sh@10 -- # set +x 00:18:00.121 [2024-12-05 14:24:05.493279] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:00.121 [2024-12-05 14:24:05.493360] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89356 ] 00:18:00.121 [2024-12-05 14:24:05.618127] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.121 [2024-12-05 14:24:05.686930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:01.069 14:24:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:01.069 14:24:06 -- common/autotest_common.sh@862 -- # return 0 00:18:01.069 14:24:06 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:01.069 [2024-12-05 14:24:06.628864] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:01.069 [2024-12-05 14:24:06.628917] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:01.069 2024/12/05 14:24:06 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-22 Msg=Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:01.069 request: 00:18:01.069 { 00:18:01.069 "method": "bdev_nvme_attach_controller", 00:18:01.069 "params": { 00:18:01.069 "name": "TLSTEST", 00:18:01.069 "trtype": "tcp", 00:18:01.069 "traddr": "10.0.0.2", 00:18:01.069 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:01.069 "adrfam": "ipv4", 00:18:01.069 "trsvcid": "4420", 00:18:01.069 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:01.069 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:18:01.069 } 00:18:01.069 } 00:18:01.069 Got 
JSON-RPC error response 00:18:01.069 GoRPCClient: error on JSON-RPC call 00:18:01.069 14:24:06 -- target/tls.sh@36 -- # killprocess 89356 00:18:01.069 14:24:06 -- common/autotest_common.sh@936 -- # '[' -z 89356 ']' 00:18:01.069 14:24:06 -- common/autotest_common.sh@940 -- # kill -0 89356 00:18:01.069 14:24:06 -- common/autotest_common.sh@941 -- # uname 00:18:01.069 14:24:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:01.069 14:24:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89356 00:18:01.069 killing process with pid 89356 00:18:01.069 Received shutdown signal, test time was about 10.000000 seconds 00:18:01.069 00:18:01.069 Latency(us) 00:18:01.069 [2024-12-05T14:24:06.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.069 [2024-12-05T14:24:06.717Z] =================================================================================================================== 00:18:01.069 [2024-12-05T14:24:06.717Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:01.069 14:24:06 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:01.069 14:24:06 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:01.069 14:24:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89356' 00:18:01.069 14:24:06 -- common/autotest_common.sh@955 -- # kill 89356 00:18:01.069 14:24:06 -- common/autotest_common.sh@960 -- # wait 89356 00:18:01.328 14:24:06 -- target/tls.sh@37 -- # return 1 00:18:01.328 14:24:06 -- common/autotest_common.sh@653 -- # es=1 00:18:01.328 14:24:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:01.328 14:24:06 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:01.328 14:24:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:01.328 14:24:06 -- target/tls.sh@183 -- # killprocess 89112 00:18:01.328 14:24:06 -- common/autotest_common.sh@936 -- # '[' -z 89112 ']' 00:18:01.328 14:24:06 -- common/autotest_common.sh@940 -- # kill -0 89112 00:18:01.328 14:24:06 -- common/autotest_common.sh@941 -- # uname 00:18:01.328 14:24:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:01.328 14:24:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89112 00:18:01.328 killing process with pid 89112 00:18:01.328 14:24:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:01.328 14:24:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:01.328 14:24:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89112' 00:18:01.328 14:24:06 -- common/autotest_common.sh@955 -- # kill 89112 00:18:01.328 14:24:06 -- common/autotest_common.sh@960 -- # wait 89112 00:18:01.896 14:24:07 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:01.896 14:24:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:01.896 14:24:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:01.896 14:24:07 -- common/autotest_common.sh@10 -- # set +x 00:18:01.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:01.896 14:24:07 -- nvmf/common.sh@469 -- # nvmfpid=89411 00:18:01.896 14:24:07 -- nvmf/common.sh@470 -- # waitforlisten 89411 00:18:01.896 14:24:07 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:01.897 14:24:07 -- common/autotest_common.sh@829 -- # '[' -z 89411 ']' 00:18:01.897 14:24:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.897 14:24:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:01.897 14:24:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.897 14:24:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:01.897 14:24:07 -- common/autotest_common.sh@10 -- # set +x 00:18:01.897 [2024-12-05 14:24:07.329248] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:01.897 [2024-12-05 14:24:07.329362] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.897 [2024-12-05 14:24:07.468472] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.897 [2024-12-05 14:24:07.537241] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:01.897 [2024-12-05 14:24:07.537400] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:01.897 [2024-12-05 14:24:07.537428] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:01.897 [2024-12-05 14:24:07.537436] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
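The nvmfappstart sequence above is the launch pattern used for each phase of this test: start nvmf_tgt inside the nvmf_tgt_ns_spdk namespace, remember its pid, and let waitforlisten poll the RPC socket (max_retries=100 in the trace) before any configuration RPC is issued. A hypothetical reduction of that pattern; the rpc_get_methods probe is an assumption about what the poll uses, not taken from this log:

  # launch the target in its namespace, then wait for /var/tmp/spdk.sock to answer
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  for ((i = 0; i < 100; i++)); do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
      sleep 0.5
  done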
00:18:01.897 [2024-12-05 14:24:07.537469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.874 14:24:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:02.874 14:24:08 -- common/autotest_common.sh@862 -- # return 0 00:18:02.874 14:24:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:02.874 14:24:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:02.874 14:24:08 -- common/autotest_common.sh@10 -- # set +x 00:18:02.874 14:24:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:02.874 14:24:08 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:02.874 14:24:08 -- common/autotest_common.sh@650 -- # local es=0 00:18:02.874 14:24:08 -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:02.874 14:24:08 -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:18:02.874 14:24:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:02.874 14:24:08 -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:18:02.874 14:24:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:02.874 14:24:08 -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:02.874 14:24:08 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:02.874 14:24:08 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:03.132 [2024-12-05 14:24:08.541258] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:03.132 14:24:08 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:03.132 14:24:08 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:03.390 [2024-12-05 14:24:08.929305] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:03.390 [2024-12-05 14:24:08.929523] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:03.390 14:24:08 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:03.648 malloc0 00:18:03.648 14:24:09 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:03.906 14:24:09 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:04.164 [2024-12-05 14:24:09.692267] tcp.c:3551:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:04.164 [2024-12-05 14:24:09.692302] tcp.c:3620:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:04.164 [2024-12-05 14:24:09.692320] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:18:04.165 2024/12/05 14:24:09 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt], err: error received for nvmf_subsystem_add_host method, 
err: Code=-32603 Msg=Internal error 00:18:04.165 request: 00:18:04.165 { 00:18:04.165 "method": "nvmf_subsystem_add_host", 00:18:04.165 "params": { 00:18:04.165 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:04.165 "host": "nqn.2016-06.io.spdk:host1", 00:18:04.165 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:18:04.165 } 00:18:04.165 } 00:18:04.165 Got JSON-RPC error response 00:18:04.165 GoRPCClient: error on JSON-RPC call 00:18:04.165 14:24:09 -- common/autotest_common.sh@653 -- # es=1 00:18:04.165 14:24:09 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:04.165 14:24:09 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:04.165 14:24:09 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:04.165 14:24:09 -- target/tls.sh@189 -- # killprocess 89411 00:18:04.165 14:24:09 -- common/autotest_common.sh@936 -- # '[' -z 89411 ']' 00:18:04.165 14:24:09 -- common/autotest_common.sh@940 -- # kill -0 89411 00:18:04.165 14:24:09 -- common/autotest_common.sh@941 -- # uname 00:18:04.165 14:24:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:04.165 14:24:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89411 00:18:04.165 killing process with pid 89411 00:18:04.165 14:24:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:04.165 14:24:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:04.165 14:24:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89411' 00:18:04.165 14:24:09 -- common/autotest_common.sh@955 -- # kill 89411 00:18:04.165 14:24:09 -- common/autotest_common.sh@960 -- # wait 89411 00:18:04.424 14:24:10 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:04.424 14:24:10 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:18:04.424 14:24:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:04.424 14:24:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:04.424 14:24:10 -- common/autotest_common.sh@10 -- # set +x 00:18:04.424 14:24:10 -- nvmf/common.sh@469 -- # nvmfpid=89523 00:18:04.424 14:24:10 -- nvmf/common.sh@470 -- # waitforlisten 89523 00:18:04.424 14:24:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:04.424 14:24:10 -- common/autotest_common.sh@829 -- # '[' -z 89523 ']' 00:18:04.424 14:24:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.424 14:24:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:04.424 14:24:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.424 14:24:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:04.424 14:24:10 -- common/autotest_common.sh@10 -- # set +x 00:18:04.684 [2024-12-05 14:24:10.084474] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
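The es=1 bookkeeping that follows both rejected RPCs above is the harness's expected-failure wrapper: the call is routed through NOT/valid_exec_arg, its nonzero exit status is captured, and the step only passes because the wrapped command failed. A condensed, hypothetical reduction of that wrapper (the real autotest_common.sh helper carries more bookkeeping than shown here):

  # succeed only when the wrapped command fails
  NOT() {
      local es=0
      "$@" || es=$?
      (( es != 0 ))
  }
  NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt \
      && echo 'add_host rejected the mis-permissioned PSK, as expected'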
00:18:04.684 [2024-12-05 14:24:10.084549] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:04.684 [2024-12-05 14:24:10.214693] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.684 [2024-12-05 14:24:10.286753] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:04.684 [2024-12-05 14:24:10.286920] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:04.684 [2024-12-05 14:24:10.286933] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:04.684 [2024-12-05 14:24:10.286942] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:04.684 [2024-12-05 14:24:10.286973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:05.619 14:24:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:05.619 14:24:11 -- common/autotest_common.sh@862 -- # return 0 00:18:05.619 14:24:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:05.619 14:24:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:05.619 14:24:11 -- common/autotest_common.sh@10 -- # set +x 00:18:05.619 14:24:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:05.619 14:24:11 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:05.619 14:24:11 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:05.619 14:24:11 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:05.878 [2024-12-05 14:24:11.375130] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:05.878 14:24:11 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:06.137 14:24:11 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:06.396 [2024-12-05 14:24:11.807221] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:06.396 [2024-12-05 14:24:11.807421] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:06.396 14:24:11 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:06.397 malloc0 00:18:06.397 14:24:12 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:06.655 14:24:12 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:06.914 14:24:12 -- target/tls.sh@197 -- # bdevperf_pid=89620 00:18:06.914 14:24:12 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:06.914 14:24:12 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:06.914 14:24:12 -- target/tls.sh@200 -- # waitforlisten 89620 /var/tmp/bdevperf.sock 00:18:06.914 
14:24:12 -- common/autotest_common.sh@829 -- # '[' -z 89620 ']' 00:18:06.914 14:24:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:06.914 14:24:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:06.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:06.914 14:24:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:06.914 14:24:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:06.914 14:24:12 -- common/autotest_common.sh@10 -- # set +x 00:18:06.914 [2024-12-05 14:24:12.536747] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:06.914 [2024-12-05 14:24:12.536827] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89620 ] 00:18:07.173 [2024-12-05 14:24:12.672178] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.173 [2024-12-05 14:24:12.752428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:08.109 14:24:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:08.109 14:24:13 -- common/autotest_common.sh@862 -- # return 0 00:18:08.109 14:24:13 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:08.109 [2024-12-05 14:24:13.631014] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:08.109 TLSTESTn1 00:18:08.109 14:24:13 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:18:08.677 14:24:14 -- target/tls.sh@205 -- # tgtconf='{ 00:18:08.677 "subsystems": [ 00:18:08.677 { 00:18:08.677 "subsystem": "iobuf", 00:18:08.677 "config": [ 00:18:08.677 { 00:18:08.677 "method": "iobuf_set_options", 00:18:08.677 "params": { 00:18:08.677 "large_bufsize": 135168, 00:18:08.677 "large_pool_count": 1024, 00:18:08.677 "small_bufsize": 8192, 00:18:08.677 "small_pool_count": 8192 00:18:08.677 } 00:18:08.677 } 00:18:08.677 ] 00:18:08.677 }, 00:18:08.677 { 00:18:08.677 "subsystem": "sock", 00:18:08.677 "config": [ 00:18:08.677 { 00:18:08.677 "method": "sock_impl_set_options", 00:18:08.677 "params": { 00:18:08.677 "enable_ktls": false, 00:18:08.677 "enable_placement_id": 0, 00:18:08.677 "enable_quickack": false, 00:18:08.677 "enable_recv_pipe": true, 00:18:08.677 "enable_zerocopy_send_client": false, 00:18:08.677 "enable_zerocopy_send_server": true, 00:18:08.677 "impl_name": "posix", 00:18:08.677 "recv_buf_size": 2097152, 00:18:08.677 "send_buf_size": 2097152, 00:18:08.677 "tls_version": 0, 00:18:08.677 "zerocopy_threshold": 0 00:18:08.677 } 00:18:08.677 }, 00:18:08.677 { 00:18:08.677 "method": "sock_impl_set_options", 00:18:08.677 "params": { 00:18:08.677 "enable_ktls": false, 00:18:08.677 "enable_placement_id": 0, 00:18:08.677 "enable_quickack": false, 00:18:08.677 "enable_recv_pipe": true, 00:18:08.677 "enable_zerocopy_send_client": false, 00:18:08.677 "enable_zerocopy_send_server": true, 00:18:08.677 "impl_name": "ssl", 00:18:08.677 "recv_buf_size": 4096, 00:18:08.677 "send_buf_size": 4096, 00:18:08.677 
"tls_version": 0, 00:18:08.677 "zerocopy_threshold": 0 00:18:08.677 } 00:18:08.677 } 00:18:08.677 ] 00:18:08.677 }, 00:18:08.677 { 00:18:08.677 "subsystem": "vmd", 00:18:08.677 "config": [] 00:18:08.677 }, 00:18:08.677 { 00:18:08.677 "subsystem": "accel", 00:18:08.677 "config": [ 00:18:08.677 { 00:18:08.677 "method": "accel_set_options", 00:18:08.677 "params": { 00:18:08.677 "buf_count": 2048, 00:18:08.677 "large_cache_size": 16, 00:18:08.677 "sequence_count": 2048, 00:18:08.677 "small_cache_size": 128, 00:18:08.677 "task_count": 2048 00:18:08.677 } 00:18:08.677 } 00:18:08.677 ] 00:18:08.677 }, 00:18:08.677 { 00:18:08.677 "subsystem": "bdev", 00:18:08.677 "config": [ 00:18:08.677 { 00:18:08.677 "method": "bdev_set_options", 00:18:08.677 "params": { 00:18:08.677 "bdev_auto_examine": true, 00:18:08.677 "bdev_io_cache_size": 256, 00:18:08.677 "bdev_io_pool_size": 65535, 00:18:08.677 "iobuf_large_cache_size": 16, 00:18:08.677 "iobuf_small_cache_size": 128 00:18:08.677 } 00:18:08.677 }, 00:18:08.677 { 00:18:08.677 "method": "bdev_raid_set_options", 00:18:08.677 "params": { 00:18:08.677 "process_window_size_kb": 1024 00:18:08.677 } 00:18:08.677 }, 00:18:08.677 { 00:18:08.677 "method": "bdev_iscsi_set_options", 00:18:08.677 "params": { 00:18:08.677 "timeout_sec": 30 00:18:08.677 } 00:18:08.677 }, 00:18:08.677 { 00:18:08.677 "method": "bdev_nvme_set_options", 00:18:08.677 "params": { 00:18:08.677 "action_on_timeout": "none", 00:18:08.677 "allow_accel_sequence": false, 00:18:08.677 "arbitration_burst": 0, 00:18:08.677 "bdev_retry_count": 3, 00:18:08.677 "ctrlr_loss_timeout_sec": 0, 00:18:08.677 "delay_cmd_submit": true, 00:18:08.677 "fast_io_fail_timeout_sec": 0, 00:18:08.677 "generate_uuids": false, 00:18:08.677 "high_priority_weight": 0, 00:18:08.677 "io_path_stat": false, 00:18:08.677 "io_queue_requests": 0, 00:18:08.677 "keep_alive_timeout_ms": 10000, 00:18:08.677 "low_priority_weight": 0, 00:18:08.677 "medium_priority_weight": 0, 00:18:08.677 "nvme_adminq_poll_period_us": 10000, 00:18:08.677 "nvme_ioq_poll_period_us": 0, 00:18:08.677 "reconnect_delay_sec": 0, 00:18:08.677 "timeout_admin_us": 0, 00:18:08.677 "timeout_us": 0, 00:18:08.677 "transport_ack_timeout": 0, 00:18:08.677 "transport_retry_count": 4, 00:18:08.677 "transport_tos": 0 00:18:08.677 } 00:18:08.677 }, 00:18:08.677 { 00:18:08.677 "method": "bdev_nvme_set_hotplug", 00:18:08.677 "params": { 00:18:08.677 "enable": false, 00:18:08.677 "period_us": 100000 00:18:08.677 } 00:18:08.677 }, 00:18:08.677 { 00:18:08.677 "method": "bdev_malloc_create", 00:18:08.677 "params": { 00:18:08.677 "block_size": 4096, 00:18:08.677 "name": "malloc0", 00:18:08.677 "num_blocks": 8192, 00:18:08.677 "optimal_io_boundary": 0, 00:18:08.677 "physical_block_size": 4096, 00:18:08.677 "uuid": "0b7a227e-6548-4017-8a2e-7357a4e7d29a" 00:18:08.677 } 00:18:08.677 }, 00:18:08.677 { 00:18:08.677 "method": "bdev_wait_for_examine" 00:18:08.677 } 00:18:08.677 ] 00:18:08.677 }, 00:18:08.677 { 00:18:08.677 "subsystem": "nbd", 00:18:08.677 "config": [] 00:18:08.677 }, 00:18:08.677 { 00:18:08.677 "subsystem": "scheduler", 00:18:08.677 "config": [ 00:18:08.677 { 00:18:08.677 "method": "framework_set_scheduler", 00:18:08.677 "params": { 00:18:08.677 "name": "static" 00:18:08.677 } 00:18:08.677 } 00:18:08.677 ] 00:18:08.677 }, 00:18:08.677 { 00:18:08.677 "subsystem": "nvmf", 00:18:08.677 "config": [ 00:18:08.677 { 00:18:08.677 "method": "nvmf_set_config", 00:18:08.677 "params": { 00:18:08.677 "admin_cmd_passthru": { 00:18:08.677 "identify_ctrlr": false 00:18:08.677 }, 
00:18:08.677 "discovery_filter": "match_any" 00:18:08.677 } 00:18:08.677 }, 00:18:08.677 { 00:18:08.677 "method": "nvmf_set_max_subsystems", 00:18:08.677 "params": { 00:18:08.677 "max_subsystems": 1024 00:18:08.677 } 00:18:08.677 }, 00:18:08.677 { 00:18:08.677 "method": "nvmf_set_crdt", 00:18:08.677 "params": { 00:18:08.678 "crdt1": 0, 00:18:08.678 "crdt2": 0, 00:18:08.678 "crdt3": 0 00:18:08.678 } 00:18:08.678 }, 00:18:08.678 { 00:18:08.678 "method": "nvmf_create_transport", 00:18:08.678 "params": { 00:18:08.678 "abort_timeout_sec": 1, 00:18:08.678 "buf_cache_size": 4294967295, 00:18:08.678 "c2h_success": false, 00:18:08.678 "dif_insert_or_strip": false, 00:18:08.678 "in_capsule_data_size": 4096, 00:18:08.678 "io_unit_size": 131072, 00:18:08.678 "max_aq_depth": 128, 00:18:08.678 "max_io_qpairs_per_ctrlr": 127, 00:18:08.678 "max_io_size": 131072, 00:18:08.678 "max_queue_depth": 128, 00:18:08.678 "num_shared_buffers": 511, 00:18:08.678 "sock_priority": 0, 00:18:08.678 "trtype": "TCP", 00:18:08.678 "zcopy": false 00:18:08.678 } 00:18:08.678 }, 00:18:08.678 { 00:18:08.678 "method": "nvmf_create_subsystem", 00:18:08.678 "params": { 00:18:08.678 "allow_any_host": false, 00:18:08.678 "ana_reporting": false, 00:18:08.678 "max_cntlid": 65519, 00:18:08.678 "max_namespaces": 10, 00:18:08.678 "min_cntlid": 1, 00:18:08.678 "model_number": "SPDK bdev Controller", 00:18:08.678 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.678 "serial_number": "SPDK00000000000001" 00:18:08.678 } 00:18:08.678 }, 00:18:08.678 { 00:18:08.678 "method": "nvmf_subsystem_add_host", 00:18:08.678 "params": { 00:18:08.678 "host": "nqn.2016-06.io.spdk:host1", 00:18:08.678 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.678 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:18:08.678 } 00:18:08.678 }, 00:18:08.678 { 00:18:08.678 "method": "nvmf_subsystem_add_ns", 00:18:08.678 "params": { 00:18:08.678 "namespace": { 00:18:08.678 "bdev_name": "malloc0", 00:18:08.678 "nguid": "0B7A227E654840178A2E7357A4E7D29A", 00:18:08.678 "nsid": 1, 00:18:08.678 "uuid": "0b7a227e-6548-4017-8a2e-7357a4e7d29a" 00:18:08.678 }, 00:18:08.678 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:18:08.678 } 00:18:08.678 }, 00:18:08.678 { 00:18:08.678 "method": "nvmf_subsystem_add_listener", 00:18:08.678 "params": { 00:18:08.678 "listen_address": { 00:18:08.678 "adrfam": "IPv4", 00:18:08.678 "traddr": "10.0.0.2", 00:18:08.678 "trsvcid": "4420", 00:18:08.678 "trtype": "TCP" 00:18:08.678 }, 00:18:08.678 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.678 "secure_channel": true 00:18:08.678 } 00:18:08.678 } 00:18:08.678 ] 00:18:08.678 } 00:18:08.678 ] 00:18:08.678 }' 00:18:08.678 14:24:14 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:08.678 14:24:14 -- target/tls.sh@206 -- # bdevperfconf='{ 00:18:08.678 "subsystems": [ 00:18:08.678 { 00:18:08.678 "subsystem": "iobuf", 00:18:08.678 "config": [ 00:18:08.678 { 00:18:08.678 "method": "iobuf_set_options", 00:18:08.678 "params": { 00:18:08.678 "large_bufsize": 135168, 00:18:08.678 "large_pool_count": 1024, 00:18:08.678 "small_bufsize": 8192, 00:18:08.678 "small_pool_count": 8192 00:18:08.678 } 00:18:08.678 } 00:18:08.678 ] 00:18:08.678 }, 00:18:08.678 { 00:18:08.678 "subsystem": "sock", 00:18:08.678 "config": [ 00:18:08.678 { 00:18:08.678 "method": "sock_impl_set_options", 00:18:08.678 "params": { 00:18:08.678 "enable_ktls": false, 00:18:08.678 "enable_placement_id": 0, 00:18:08.678 "enable_quickack": false, 00:18:08.678 "enable_recv_pipe": true, 
00:18:08.678 "enable_zerocopy_send_client": false, 00:18:08.678 "enable_zerocopy_send_server": true, 00:18:08.678 "impl_name": "posix", 00:18:08.678 "recv_buf_size": 2097152, 00:18:08.678 "send_buf_size": 2097152, 00:18:08.678 "tls_version": 0, 00:18:08.678 "zerocopy_threshold": 0 00:18:08.678 } 00:18:08.678 }, 00:18:08.678 { 00:18:08.678 "method": "sock_impl_set_options", 00:18:08.678 "params": { 00:18:08.678 "enable_ktls": false, 00:18:08.678 "enable_placement_id": 0, 00:18:08.678 "enable_quickack": false, 00:18:08.678 "enable_recv_pipe": true, 00:18:08.678 "enable_zerocopy_send_client": false, 00:18:08.678 "enable_zerocopy_send_server": true, 00:18:08.678 "impl_name": "ssl", 00:18:08.678 "recv_buf_size": 4096, 00:18:08.678 "send_buf_size": 4096, 00:18:08.678 "tls_version": 0, 00:18:08.678 "zerocopy_threshold": 0 00:18:08.678 } 00:18:08.678 } 00:18:08.678 ] 00:18:08.678 }, 00:18:08.678 { 00:18:08.678 "subsystem": "vmd", 00:18:08.678 "config": [] 00:18:08.678 }, 00:18:08.678 { 00:18:08.678 "subsystem": "accel", 00:18:08.678 "config": [ 00:18:08.678 { 00:18:08.678 "method": "accel_set_options", 00:18:08.678 "params": { 00:18:08.678 "buf_count": 2048, 00:18:08.678 "large_cache_size": 16, 00:18:08.678 "sequence_count": 2048, 00:18:08.678 "small_cache_size": 128, 00:18:08.678 "task_count": 2048 00:18:08.678 } 00:18:08.678 } 00:18:08.678 ] 00:18:08.678 }, 00:18:08.678 { 00:18:08.678 "subsystem": "bdev", 00:18:08.678 "config": [ 00:18:08.678 { 00:18:08.678 "method": "bdev_set_options", 00:18:08.678 "params": { 00:18:08.678 "bdev_auto_examine": true, 00:18:08.678 "bdev_io_cache_size": 256, 00:18:08.678 "bdev_io_pool_size": 65535, 00:18:08.678 "iobuf_large_cache_size": 16, 00:18:08.678 "iobuf_small_cache_size": 128 00:18:08.678 } 00:18:08.678 }, 00:18:08.678 { 00:18:08.678 "method": "bdev_raid_set_options", 00:18:08.678 "params": { 00:18:08.678 "process_window_size_kb": 1024 00:18:08.678 } 00:18:08.678 }, 00:18:08.678 { 00:18:08.678 "method": "bdev_iscsi_set_options", 00:18:08.678 "params": { 00:18:08.678 "timeout_sec": 30 00:18:08.678 } 00:18:08.678 }, 00:18:08.678 { 00:18:08.678 "method": "bdev_nvme_set_options", 00:18:08.678 "params": { 00:18:08.678 "action_on_timeout": "none", 00:18:08.678 "allow_accel_sequence": false, 00:18:08.678 "arbitration_burst": 0, 00:18:08.678 "bdev_retry_count": 3, 00:18:08.678 "ctrlr_loss_timeout_sec": 0, 00:18:08.678 "delay_cmd_submit": true, 00:18:08.678 "fast_io_fail_timeout_sec": 0, 00:18:08.678 "generate_uuids": false, 00:18:08.678 "high_priority_weight": 0, 00:18:08.678 "io_path_stat": false, 00:18:08.678 "io_queue_requests": 512, 00:18:08.678 "keep_alive_timeout_ms": 10000, 00:18:08.678 "low_priority_weight": 0, 00:18:08.678 "medium_priority_weight": 0, 00:18:08.678 "nvme_adminq_poll_period_us": 10000, 00:18:08.678 "nvme_ioq_poll_period_us": 0, 00:18:08.678 "reconnect_delay_sec": 0, 00:18:08.678 "timeout_admin_us": 0, 00:18:08.678 "timeout_us": 0, 00:18:08.678 "transport_ack_timeout": 0, 00:18:08.678 "transport_retry_count": 4, 00:18:08.678 "transport_tos": 0 00:18:08.678 } 00:18:08.678 }, 00:18:08.678 { 00:18:08.678 "method": "bdev_nvme_attach_controller", 00:18:08.678 "params": { 00:18:08.678 "adrfam": "IPv4", 00:18:08.678 "ctrlr_loss_timeout_sec": 0, 00:18:08.678 "ddgst": false, 00:18:08.678 "fast_io_fail_timeout_sec": 0, 00:18:08.678 "hdgst": false, 00:18:08.678 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:08.678 "name": "TLSTEST", 00:18:08.678 "prchk_guard": false, 00:18:08.678 "prchk_reftag": false, 00:18:08.678 "psk": 
"/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:18:08.678 "reconnect_delay_sec": 0, 00:18:08.678 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.678 "traddr": "10.0.0.2", 00:18:08.678 "trsvcid": "4420", 00:18:08.678 "trtype": "TCP" 00:18:08.678 } 00:18:08.678 }, 00:18:08.678 { 00:18:08.678 "method": "bdev_nvme_set_hotplug", 00:18:08.678 "params": { 00:18:08.678 "enable": false, 00:18:08.678 "period_us": 100000 00:18:08.678 } 00:18:08.678 }, 00:18:08.678 { 00:18:08.678 "method": "bdev_wait_for_examine" 00:18:08.678 } 00:18:08.678 ] 00:18:08.678 }, 00:18:08.678 { 00:18:08.678 "subsystem": "nbd", 00:18:08.678 "config": [] 00:18:08.678 } 00:18:08.678 ] 00:18:08.678 }' 00:18:08.678 14:24:14 -- target/tls.sh@208 -- # killprocess 89620 00:18:08.679 14:24:14 -- common/autotest_common.sh@936 -- # '[' -z 89620 ']' 00:18:08.679 14:24:14 -- common/autotest_common.sh@940 -- # kill -0 89620 00:18:08.679 14:24:14 -- common/autotest_common.sh@941 -- # uname 00:18:08.679 14:24:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:08.679 14:24:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89620 00:18:08.938 14:24:14 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:08.938 killing process with pid 89620 00:18:08.938 14:24:14 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:08.938 14:24:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89620' 00:18:08.938 14:24:14 -- common/autotest_common.sh@955 -- # kill 89620 00:18:08.938 Received shutdown signal, test time was about 10.000000 seconds 00:18:08.938 00:18:08.938 Latency(us) 00:18:08.938 [2024-12-05T14:24:14.586Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:08.938 [2024-12-05T14:24:14.586Z] =================================================================================================================== 00:18:08.938 [2024-12-05T14:24:14.586Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:08.938 14:24:14 -- common/autotest_common.sh@960 -- # wait 89620 00:18:09.197 14:24:14 -- target/tls.sh@209 -- # killprocess 89523 00:18:09.197 14:24:14 -- common/autotest_common.sh@936 -- # '[' -z 89523 ']' 00:18:09.197 14:24:14 -- common/autotest_common.sh@940 -- # kill -0 89523 00:18:09.197 14:24:14 -- common/autotest_common.sh@941 -- # uname 00:18:09.197 14:24:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:09.197 14:24:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89523 00:18:09.197 killing process with pid 89523 00:18:09.197 14:24:14 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:09.197 14:24:14 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:09.197 14:24:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89523' 00:18:09.197 14:24:14 -- common/autotest_common.sh@955 -- # kill 89523 00:18:09.197 14:24:14 -- common/autotest_common.sh@960 -- # wait 89523 00:18:09.458 14:24:14 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:09.458 14:24:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:09.458 14:24:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:09.458 14:24:14 -- target/tls.sh@212 -- # echo '{ 00:18:09.458 "subsystems": [ 00:18:09.458 { 00:18:09.458 "subsystem": "iobuf", 00:18:09.458 "config": [ 00:18:09.458 { 00:18:09.458 "method": "iobuf_set_options", 00:18:09.458 "params": { 00:18:09.458 "large_bufsize": 135168, 00:18:09.458 "large_pool_count": 1024, 
00:18:09.458 "small_bufsize": 8192, 00:18:09.458 "small_pool_count": 8192 00:18:09.458 } 00:18:09.458 } 00:18:09.458 ] 00:18:09.458 }, 00:18:09.458 { 00:18:09.458 "subsystem": "sock", 00:18:09.458 "config": [ 00:18:09.458 { 00:18:09.458 "method": "sock_impl_set_options", 00:18:09.458 "params": { 00:18:09.458 "enable_ktls": false, 00:18:09.458 "enable_placement_id": 0, 00:18:09.458 "enable_quickack": false, 00:18:09.458 "enable_recv_pipe": true, 00:18:09.458 "enable_zerocopy_send_client": false, 00:18:09.458 "enable_zerocopy_send_server": true, 00:18:09.458 "impl_name": "posix", 00:18:09.458 "recv_buf_size": 2097152, 00:18:09.458 "send_buf_size": 2097152, 00:18:09.458 "tls_version": 0, 00:18:09.458 "zerocopy_threshold": 0 00:18:09.458 } 00:18:09.458 }, 00:18:09.458 { 00:18:09.458 "method": "sock_impl_set_options", 00:18:09.458 "params": { 00:18:09.458 "enable_ktls": false, 00:18:09.458 "enable_placement_id": 0, 00:18:09.458 "enable_quickack": false, 00:18:09.458 "enable_recv_pipe": true, 00:18:09.458 "enable_zerocopy_send_client": false, 00:18:09.458 "enable_zerocopy_send_server": true, 00:18:09.458 "impl_name": "ssl", 00:18:09.458 "recv_buf_size": 4096, 00:18:09.458 "send_buf_size": 4096, 00:18:09.458 "tls_version": 0, 00:18:09.458 "zerocopy_threshold": 0 00:18:09.458 } 00:18:09.458 } 00:18:09.458 ] 00:18:09.458 }, 00:18:09.458 { 00:18:09.458 "subsystem": "vmd", 00:18:09.458 "config": [] 00:18:09.458 }, 00:18:09.458 { 00:18:09.458 "subsystem": "accel", 00:18:09.458 "config": [ 00:18:09.458 { 00:18:09.458 "method": "accel_set_options", 00:18:09.458 "params": { 00:18:09.458 "buf_count": 2048, 00:18:09.458 "large_cache_size": 16, 00:18:09.458 "sequence_count": 2048, 00:18:09.458 "small_cache_size": 128, 00:18:09.458 "task_count": 2048 00:18:09.458 } 00:18:09.458 } 00:18:09.458 ] 00:18:09.458 }, 00:18:09.458 { 00:18:09.458 "subsystem": "bdev", 00:18:09.458 "config": [ 00:18:09.458 { 00:18:09.458 "method": "bdev_set_options", 00:18:09.458 "params": { 00:18:09.458 "bdev_auto_examine": true, 00:18:09.458 "bdev_io_cache_size": 256, 00:18:09.458 "bdev_io_pool_size": 65535, 00:18:09.458 "iobuf_large_cache_size": 16, 00:18:09.458 "iobuf_small_cache_size": 128 00:18:09.458 } 00:18:09.458 }, 00:18:09.458 { 00:18:09.458 "method": "bdev_raid_set_options", 00:18:09.458 "params": { 00:18:09.458 "process_window_size_kb": 1024 00:18:09.458 } 00:18:09.458 }, 00:18:09.458 { 00:18:09.458 "method": "bdev_iscsi_set_options", 00:18:09.458 "params": { 00:18:09.458 "timeout_sec": 30 00:18:09.458 } 00:18:09.458 }, 00:18:09.458 { 00:18:09.458 "method": "bdev_nvme_set_options", 00:18:09.458 "params": { 00:18:09.458 "action_on_timeout": "none", 00:18:09.458 "allow_accel_sequence": false, 00:18:09.458 "arbitration_burst": 0, 00:18:09.458 "bdev_retry_count": 3, 00:18:09.458 "ctrlr_loss_timeout_sec": 0, 00:18:09.458 "delay_cmd_submit": true, 00:18:09.458 "fast_io_fail_timeout_sec": 0, 00:18:09.458 "generate_uuids": false, 00:18:09.458 "high_priority_weight": 0, 00:18:09.458 "io_path_stat": false, 00:18:09.458 "io_queue_requests": 0, 00:18:09.458 "keep_alive_timeout_ms": 10000, 00:18:09.458 "low_priority_weight": 0, 00:18:09.458 "medium_priority_weight": 0, 00:18:09.458 "nvme_adminq_poll_period_us": 10000, 00:18:09.458 "nvme_ioq_poll_period_us": 0, 00:18:09.458 "reconnect_delay_sec": 0, 00:18:09.458 "timeout_admin_us": 0, 00:18:09.458 "timeout_us": 0, 00:18:09.458 "transport_ack_timeout": 0, 00:18:09.458 "transport_retry_count": 4, 00:18:09.458 "transport_tos": 0 00:18:09.458 } 00:18:09.458 }, 00:18:09.458 { 00:18:09.458 
"method": "bdev_nvme_set_hotplug", 00:18:09.458 "params": { 00:18:09.458 "enable": false, 00:18:09.458 "period_us": 100000 00:18:09.458 } 00:18:09.458 }, 00:18:09.458 { 00:18:09.458 "method": "bdev_malloc_create", 00:18:09.458 "params": { 00:18:09.458 "block_size": 4096, 00:18:09.458 "name": "malloc0", 00:18:09.458 "num_blocks": 8192, 00:18:09.458 "optimal_io_boundary": 0, 00:18:09.458 "physical_block_size": 4096, 00:18:09.458 "uuid": "0b7a227e-6548-4017-8a2e-7357a4e7d29a" 00:18:09.458 } 00:18:09.458 }, 00:18:09.458 { 00:18:09.458 "method": "bdev_wait_for_examine" 00:18:09.458 } 00:18:09.458 ] 00:18:09.458 }, 00:18:09.458 { 00:18:09.458 "subsystem": "nbd", 00:18:09.458 "config": [] 00:18:09.458 }, 00:18:09.458 { 00:18:09.458 "subsystem": "scheduler", 00:18:09.458 "config": [ 00:18:09.458 { 00:18:09.458 "method": "framework_set_scheduler", 00:18:09.458 "params": { 00:18:09.458 "name": "static" 00:18:09.458 } 00:18:09.458 } 00:18:09.458 ] 00:18:09.458 }, 00:18:09.458 { 00:18:09.458 "subsystem": "nvmf", 00:18:09.458 "config": [ 00:18:09.458 { 00:18:09.458 "method": "nvmf_set_config", 00:18:09.458 "params": { 00:18:09.458 "admin_cmd_passthru": { 00:18:09.458 "identify_ctrlr": false 00:18:09.458 }, 00:18:09.458 "discovery_filter": "match_any" 00:18:09.458 } 00:18:09.458 }, 00:18:09.458 { 00:18:09.458 "method": "nvmf_set_max_subsystems", 00:18:09.458 "params": { 00:18:09.458 "max_subsystems": 1024 00:18:09.458 } 00:18:09.458 }, 00:18:09.458 { 00:18:09.458 "method": "nvmf_set_crdt", 00:18:09.458 "params": { 00:18:09.458 "crdt1": 0, 00:18:09.458 "crdt2": 0, 00:18:09.458 "crdt3": 0 00:18:09.458 } 00:18:09.458 }, 00:18:09.458 { 00:18:09.459 "method": "nvmf_create_transport", 00:18:09.459 "params": { 00:18:09.459 "abort_timeout_sec": 1, 00:18:09.459 "buf_cache_size": 4294967295, 00:18:09.459 "c2h_success": false, 00:18:09.459 "dif_insert_or_strip": false, 00:18:09.459 "in_capsule_data_size": 4096, 00:18:09.459 "io_unit_size": 131072, 00:18:09.459 "max_aq_depth": 128, 00:18:09.459 "max_io_qpairs_per_ctrlr": 127, 00:18:09.459 "max_io_size": 131072, 00:18:09.459 "max_queue_depth": 128, 00:18:09.459 "num_shared_buffers": 511, 00:18:09.459 "sock_priority": 0, 00:18:09.459 "trtype": "TCP", 00:18:09.459 "zcopy": false 00:18:09.459 } 00:18:09.459 }, 00:18:09.459 { 00:18:09.459 "method": "nvmf_create_subsystem", 00:18:09.459 "params": { 00:18:09.459 "allow_any_host": false, 00:18:09.459 "ana_reporting": false, 00:18:09.459 "max_cntlid": 65519, 00:18:09.459 "max_namespaces": 10, 00:18:09.459 "min_cntlid": 1, 00:18:09.459 "model_number": "SPDK bdev Controller", 00:18:09.459 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.459 "serial_number": "SPDK00000000000001" 00:18:09.459 } 00:18:09.459 }, 00:18:09.459 { 00:18:09.459 "method": "nvmf_subsystem_add_host", 00:18:09.459 "params": { 00:18:09.459 "host": "nqn.2016-06.io.spdk:host1", 00:18:09.459 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.459 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:18:09.459 } 00:18:09.459 }, 00:18:09.459 { 00:18:09.459 "method": "nvmf_subsystem_add_ns", 00:18:09.459 "params": { 00:18:09.459 "namespace": { 00:18:09.459 "bdev_name": "malloc0", 00:18:09.459 "nguid": "0B7A227E654840178A2E7357A4E7D29A", 00:18:09.459 "nsid": 1, 00:18:09.459 "uuid": "0b7a227e-6548-4017-8a2e-7357a4e7d29a" 00:18:09.459 }, 00:18:09.459 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:18:09.459 } 00:18:09.459 }, 00:18:09.459 { 00:18:09.459 "method": "nvmf_subsystem_add_listener", 00:18:09.459 "params": { 00:18:09.459 "listen_address": { 00:18:09.459 
"adrfam": "IPv4", 00:18:09.459 "traddr": "10.0.0.2", 00:18:09.459 "trsvcid": "4420", 00:18:09.459 "trtype": "TCP" 00:18:09.459 }, 00:18:09.459 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.459 "secure_channel": true 00:18:09.459 } 00:18:09.459 } 00:18:09.459 ] 00:18:09.459 } 00:18:09.459 ] 00:18:09.459 }' 00:18:09.459 14:24:14 -- common/autotest_common.sh@10 -- # set +x 00:18:09.459 14:24:14 -- nvmf/common.sh@469 -- # nvmfpid=89699 00:18:09.459 14:24:14 -- nvmf/common.sh@470 -- # waitforlisten 89699 00:18:09.459 14:24:14 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:09.459 14:24:14 -- common/autotest_common.sh@829 -- # '[' -z 89699 ']' 00:18:09.459 14:24:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.459 14:24:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:09.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.459 14:24:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.459 14:24:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:09.459 14:24:14 -- common/autotest_common.sh@10 -- # set +x 00:18:09.459 [2024-12-05 14:24:14.976397] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:09.459 [2024-12-05 14:24:14.976493] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:09.717 [2024-12-05 14:24:15.115849] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.717 [2024-12-05 14:24:15.200979] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:09.717 [2024-12-05 14:24:15.201134] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:09.717 [2024-12-05 14:24:15.201147] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:09.717 [2024-12-05 14:24:15.201155] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:09.717 [2024-12-05 14:24:15.201179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:09.974 [2024-12-05 14:24:15.452670] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:09.974 [2024-12-05 14:24:15.484662] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:09.974 [2024-12-05 14:24:15.484937] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:10.540 14:24:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:10.540 14:24:15 -- common/autotest_common.sh@862 -- # return 0 00:18:10.540 14:24:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:10.540 14:24:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:10.540 14:24:15 -- common/autotest_common.sh@10 -- # set +x 00:18:10.540 14:24:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:10.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:10.540 14:24:15 -- target/tls.sh@216 -- # bdevperf_pid=89743 00:18:10.540 14:24:15 -- target/tls.sh@217 -- # waitforlisten 89743 /var/tmp/bdevperf.sock 00:18:10.540 14:24:15 -- common/autotest_common.sh@829 -- # '[' -z 89743 ']' 00:18:10.540 14:24:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:10.540 14:24:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:10.540 14:24:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:10.540 14:24:15 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:10.540 14:24:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:10.540 14:24:15 -- common/autotest_common.sh@10 -- # set +x 00:18:10.540 14:24:15 -- target/tls.sh@213 -- # echo '{ 00:18:10.540 "subsystems": [ 00:18:10.540 { 00:18:10.540 "subsystem": "iobuf", 00:18:10.540 "config": [ 00:18:10.540 { 00:18:10.540 "method": "iobuf_set_options", 00:18:10.540 "params": { 00:18:10.540 "large_bufsize": 135168, 00:18:10.540 "large_pool_count": 1024, 00:18:10.540 "small_bufsize": 8192, 00:18:10.540 "small_pool_count": 8192 00:18:10.540 } 00:18:10.541 } 00:18:10.541 ] 00:18:10.541 }, 00:18:10.541 { 00:18:10.541 "subsystem": "sock", 00:18:10.541 "config": [ 00:18:10.541 { 00:18:10.541 "method": "sock_impl_set_options", 00:18:10.541 "params": { 00:18:10.541 "enable_ktls": false, 00:18:10.541 "enable_placement_id": 0, 00:18:10.541 "enable_quickack": false, 00:18:10.541 "enable_recv_pipe": true, 00:18:10.541 "enable_zerocopy_send_client": false, 00:18:10.541 "enable_zerocopy_send_server": true, 00:18:10.541 "impl_name": "posix", 00:18:10.541 "recv_buf_size": 2097152, 00:18:10.541 "send_buf_size": 2097152, 00:18:10.541 "tls_version": 0, 00:18:10.541 "zerocopy_threshold": 0 00:18:10.541 } 00:18:10.541 }, 00:18:10.541 { 00:18:10.541 "method": "sock_impl_set_options", 00:18:10.541 "params": { 00:18:10.541 "enable_ktls": false, 00:18:10.541 "enable_placement_id": 0, 00:18:10.541 "enable_quickack": false, 00:18:10.541 "enable_recv_pipe": true, 00:18:10.541 "enable_zerocopy_send_client": false, 00:18:10.541 "enable_zerocopy_send_server": true, 00:18:10.541 "impl_name": "ssl", 00:18:10.541 "recv_buf_size": 4096, 00:18:10.541 "send_buf_size": 4096, 00:18:10.541 "tls_version": 0, 00:18:10.541 "zerocopy_threshold": 0 00:18:10.541 } 00:18:10.541 } 00:18:10.541 ] 00:18:10.541 }, 00:18:10.541 { 00:18:10.541 "subsystem": "vmd", 00:18:10.541 "config": [] 00:18:10.541 }, 00:18:10.541 { 00:18:10.541 "subsystem": "accel", 00:18:10.541 "config": [ 00:18:10.541 { 00:18:10.541 "method": "accel_set_options", 00:18:10.541 "params": { 00:18:10.541 "buf_count": 2048, 00:18:10.541 "large_cache_size": 16, 00:18:10.541 "sequence_count": 2048, 00:18:10.541 "small_cache_size": 128, 00:18:10.541 "task_count": 2048 00:18:10.541 } 00:18:10.541 } 00:18:10.541 ] 00:18:10.541 }, 00:18:10.541 { 00:18:10.541 "subsystem": "bdev", 00:18:10.541 "config": [ 00:18:10.541 { 00:18:10.541 "method": "bdev_set_options", 00:18:10.541 "params": { 00:18:10.541 "bdev_auto_examine": true, 00:18:10.541 "bdev_io_cache_size": 256, 00:18:10.541 "bdev_io_pool_size": 65535, 00:18:10.541 "iobuf_large_cache_size": 16, 00:18:10.541 "iobuf_small_cache_size": 128 00:18:10.541 } 00:18:10.541 }, 00:18:10.541 { 00:18:10.541 "method": "bdev_raid_set_options", 00:18:10.541 "params": { 00:18:10.541 
"process_window_size_kb": 1024 00:18:10.541 } 00:18:10.541 }, 00:18:10.541 { 00:18:10.541 "method": "bdev_iscsi_set_options", 00:18:10.541 "params": { 00:18:10.541 "timeout_sec": 30 00:18:10.541 } 00:18:10.541 }, 00:18:10.541 { 00:18:10.541 "method": "bdev_nvme_set_options", 00:18:10.541 "params": { 00:18:10.541 "action_on_timeout": "none", 00:18:10.541 "allow_accel_sequence": false, 00:18:10.541 "arbitration_burst": 0, 00:18:10.541 "bdev_retry_count": 3, 00:18:10.541 "ctrlr_loss_timeout_sec": 0, 00:18:10.541 "delay_cmd_submit": true, 00:18:10.541 "fast_io_fail_timeout_sec": 0, 00:18:10.541 "generate_uuids": false, 00:18:10.541 "high_priority_weight": 0, 00:18:10.541 "io_path_stat": false, 00:18:10.541 "io_queue_requests": 512, 00:18:10.541 "keep_alive_timeout_ms": 10000, 00:18:10.541 "low_priority_weight": 0, 00:18:10.541 "medium_priority_weight": 0, 00:18:10.541 "nvme_adminq_poll_period_us": 10000, 00:18:10.541 "nvme_ioq_poll_period_us": 0, 00:18:10.541 "reconnect_delay_sec": 0, 00:18:10.541 "timeout_admin_us": 0, 00:18:10.541 "timeout_us": 0, 00:18:10.541 "transport_ack_timeout": 0, 00:18:10.541 "transport_retry_count": 4, 00:18:10.541 "transport_tos": 0 00:18:10.541 } 00:18:10.541 }, 00:18:10.541 { 00:18:10.541 "method": "bdev_nvme_attach_controller", 00:18:10.541 "params": { 00:18:10.541 "adrfam": "IPv4", 00:18:10.541 "ctrlr_loss_timeout_sec": 0, 00:18:10.541 "ddgst": false, 00:18:10.541 "fast_io_fail_timeout_sec": 0, 00:18:10.541 "hdgst": false, 00:18:10.541 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:10.541 "name": "TLSTEST", 00:18:10.541 "prchk_guard": false, 00:18:10.541 "prchk_reftag": false, 00:18:10.541 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:18:10.541 "reconnect_delay_sec": 0, 00:18:10.541 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:10.541 "traddr": "10.0.0.2", 00:18:10.541 "trsvcid": "4420", 00:18:10.541 "trtype": "TCP" 00:18:10.541 } 00:18:10.541 }, 00:18:10.541 { 00:18:10.541 "method": "bdev_nvme_set_hotplug", 00:18:10.541 "params": { 00:18:10.541 "enable": false, 00:18:10.541 "period_us": 100000 00:18:10.541 } 00:18:10.541 }, 00:18:10.541 { 00:18:10.541 "method": "bdev_wait_for_examine" 00:18:10.541 } 00:18:10.541 ] 00:18:10.541 }, 00:18:10.541 { 00:18:10.541 "subsystem": "nbd", 00:18:10.541 "config": [] 00:18:10.541 } 00:18:10.541 ] 00:18:10.541 }' 00:18:10.541 [2024-12-05 14:24:16.034760] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:10.541 [2024-12-05 14:24:16.034856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89743 ] 00:18:10.541 [2024-12-05 14:24:16.166992] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.800 [2024-12-05 14:24:16.252317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:10.800 [2024-12-05 14:24:16.442567] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:11.737 14:24:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:11.737 14:24:17 -- common/autotest_common.sh@862 -- # return 0 00:18:11.737 14:24:17 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:11.737 Running I/O for 10 seconds... 
00:18:21.715 00:18:21.715 Latency(us) 00:18:21.715 [2024-12-05T14:24:27.363Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.715 [2024-12-05T14:24:27.363Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:21.715 Verification LBA range: start 0x0 length 0x2000 00:18:21.715 TLSTESTn1 : 10.01 5930.09 23.16 0.00 0.00 21560.09 2621.44 24427.05 00:18:21.715 [2024-12-05T14:24:27.363Z] =================================================================================================================== 00:18:21.715 [2024-12-05T14:24:27.363Z] Total : 5930.09 23.16 0.00 0.00 21560.09 2621.44 24427.05 00:18:21.715 0 00:18:21.715 14:24:27 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:21.715 14:24:27 -- target/tls.sh@223 -- # killprocess 89743 00:18:21.715 14:24:27 -- common/autotest_common.sh@936 -- # '[' -z 89743 ']' 00:18:21.715 14:24:27 -- common/autotest_common.sh@940 -- # kill -0 89743 00:18:21.715 14:24:27 -- common/autotest_common.sh@941 -- # uname 00:18:21.715 14:24:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:21.715 14:24:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89743 00:18:21.715 14:24:27 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:21.715 14:24:27 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:21.715 killing process with pid 89743 00:18:21.715 14:24:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89743' 00:18:21.715 Received shutdown signal, test time was about 10.000000 seconds 00:18:21.715 00:18:21.715 Latency(us) 00:18:21.715 [2024-12-05T14:24:27.363Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.715 [2024-12-05T14:24:27.363Z] =================================================================================================================== 00:18:21.715 [2024-12-05T14:24:27.363Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:21.715 14:24:27 -- common/autotest_common.sh@955 -- # kill 89743 00:18:21.715 14:24:27 -- common/autotest_common.sh@960 -- # wait 89743 00:18:21.975 14:24:27 -- target/tls.sh@224 -- # killprocess 89699 00:18:21.975 14:24:27 -- common/autotest_common.sh@936 -- # '[' -z 89699 ']' 00:18:21.975 14:24:27 -- common/autotest_common.sh@940 -- # kill -0 89699 00:18:21.975 14:24:27 -- common/autotest_common.sh@941 -- # uname 00:18:21.975 14:24:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:21.975 14:24:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89699 00:18:21.975 14:24:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:21.975 14:24:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:21.975 killing process with pid 89699 00:18:21.975 14:24:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89699' 00:18:21.975 14:24:27 -- common/autotest_common.sh@955 -- # kill 89699 00:18:21.975 14:24:27 -- common/autotest_common.sh@960 -- # wait 89699 00:18:22.233 14:24:27 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:18:22.233 14:24:27 -- target/tls.sh@227 -- # cleanup 00:18:22.233 14:24:27 -- target/tls.sh@15 -- # process_shm --id 0 00:18:22.233 14:24:27 -- common/autotest_common.sh@806 -- # type=--id 00:18:22.233 14:24:27 -- common/autotest_common.sh@807 -- # id=0 00:18:22.233 14:24:27 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:22.233 14:24:27 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' 
-printf '%f\n' 00:18:22.233 14:24:27 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:22.233 14:24:27 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:22.233 14:24:27 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:22.233 14:24:27 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:22.233 nvmf_trace.0 00:18:22.233 14:24:27 -- common/autotest_common.sh@821 -- # return 0 00:18:22.233 14:24:27 -- target/tls.sh@16 -- # killprocess 89743 00:18:22.233 14:24:27 -- common/autotest_common.sh@936 -- # '[' -z 89743 ']' 00:18:22.233 14:24:27 -- common/autotest_common.sh@940 -- # kill -0 89743 00:18:22.233 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (89743) - No such process 00:18:22.233 Process with pid 89743 is not found 00:18:22.233 14:24:27 -- common/autotest_common.sh@963 -- # echo 'Process with pid 89743 is not found' 00:18:22.234 14:24:27 -- target/tls.sh@17 -- # nvmftestfini 00:18:22.234 14:24:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:22.234 14:24:27 -- nvmf/common.sh@116 -- # sync 00:18:22.234 14:24:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:22.234 14:24:27 -- nvmf/common.sh@119 -- # set +e 00:18:22.234 14:24:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:22.234 14:24:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:22.234 rmmod nvme_tcp 00:18:22.234 rmmod nvme_fabrics 00:18:22.491 rmmod nvme_keyring 00:18:22.491 14:24:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:22.491 14:24:27 -- nvmf/common.sh@123 -- # set -e 00:18:22.491 14:24:27 -- nvmf/common.sh@124 -- # return 0 00:18:22.491 14:24:27 -- nvmf/common.sh@477 -- # '[' -n 89699 ']' 00:18:22.491 14:24:27 -- nvmf/common.sh@478 -- # killprocess 89699 00:18:22.491 14:24:27 -- common/autotest_common.sh@936 -- # '[' -z 89699 ']' 00:18:22.491 14:24:27 -- common/autotest_common.sh@940 -- # kill -0 89699 00:18:22.491 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (89699) - No such process 00:18:22.491 Process with pid 89699 is not found 00:18:22.492 14:24:27 -- common/autotest_common.sh@963 -- # echo 'Process with pid 89699 is not found' 00:18:22.492 14:24:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:22.492 14:24:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:22.492 14:24:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:22.492 14:24:27 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:22.492 14:24:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:22.492 14:24:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.492 14:24:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:22.492 14:24:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.492 14:24:27 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:22.492 14:24:27 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:22.492 00:18:22.492 real 1m11.071s 00:18:22.492 user 1m45.107s 00:18:22.492 sys 0m27.504s 00:18:22.492 14:24:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:22.492 14:24:27 -- common/autotest_common.sh@10 -- # set +x 00:18:22.492 ************************************ 00:18:22.492 END TEST nvmf_tls 00:18:22.492 
************************************ 00:18:22.492 14:24:27 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:22.492 14:24:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:22.492 14:24:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:22.492 14:24:27 -- common/autotest_common.sh@10 -- # set +x 00:18:22.492 ************************************ 00:18:22.492 START TEST nvmf_fips 00:18:22.492 ************************************ 00:18:22.492 14:24:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:22.492 * Looking for test storage... 00:18:22.492 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:18:22.492 14:24:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:22.492 14:24:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:22.492 14:24:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:22.750 14:24:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:22.751 14:24:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:22.751 14:24:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:22.751 14:24:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:22.751 14:24:28 -- scripts/common.sh@335 -- # IFS=.-: 00:18:22.751 14:24:28 -- scripts/common.sh@335 -- # read -ra ver1 00:18:22.751 14:24:28 -- scripts/common.sh@336 -- # IFS=.-: 00:18:22.751 14:24:28 -- scripts/common.sh@336 -- # read -ra ver2 00:18:22.751 14:24:28 -- scripts/common.sh@337 -- # local 'op=<' 00:18:22.751 14:24:28 -- scripts/common.sh@339 -- # ver1_l=2 00:18:22.751 14:24:28 -- scripts/common.sh@340 -- # ver2_l=1 00:18:22.751 14:24:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:22.751 14:24:28 -- scripts/common.sh@343 -- # case "$op" in 00:18:22.751 14:24:28 -- scripts/common.sh@344 -- # : 1 00:18:22.751 14:24:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:22.751 14:24:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:22.751 14:24:28 -- scripts/common.sh@364 -- # decimal 1 00:18:22.751 14:24:28 -- scripts/common.sh@352 -- # local d=1 00:18:22.751 14:24:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:22.751 14:24:28 -- scripts/common.sh@354 -- # echo 1 00:18:22.751 14:24:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:22.751 14:24:28 -- scripts/common.sh@365 -- # decimal 2 00:18:22.751 14:24:28 -- scripts/common.sh@352 -- # local d=2 00:18:22.751 14:24:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:22.751 14:24:28 -- scripts/common.sh@354 -- # echo 2 00:18:22.751 14:24:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:22.751 14:24:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:22.751 14:24:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:22.751 14:24:28 -- scripts/common.sh@367 -- # return 0 00:18:22.751 14:24:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:22.751 14:24:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:22.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.751 --rc genhtml_branch_coverage=1 00:18:22.751 --rc genhtml_function_coverage=1 00:18:22.751 --rc genhtml_legend=1 00:18:22.751 --rc geninfo_all_blocks=1 00:18:22.751 --rc geninfo_unexecuted_blocks=1 00:18:22.751 00:18:22.751 ' 00:18:22.751 14:24:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:22.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.751 --rc genhtml_branch_coverage=1 00:18:22.751 --rc genhtml_function_coverage=1 00:18:22.751 --rc genhtml_legend=1 00:18:22.751 --rc geninfo_all_blocks=1 00:18:22.751 --rc geninfo_unexecuted_blocks=1 00:18:22.751 00:18:22.751 ' 00:18:22.751 14:24:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:22.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.751 --rc genhtml_branch_coverage=1 00:18:22.751 --rc genhtml_function_coverage=1 00:18:22.751 --rc genhtml_legend=1 00:18:22.751 --rc geninfo_all_blocks=1 00:18:22.751 --rc geninfo_unexecuted_blocks=1 00:18:22.751 00:18:22.751 ' 00:18:22.751 14:24:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:22.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.751 --rc genhtml_branch_coverage=1 00:18:22.751 --rc genhtml_function_coverage=1 00:18:22.751 --rc genhtml_legend=1 00:18:22.751 --rc geninfo_all_blocks=1 00:18:22.751 --rc geninfo_unexecuted_blocks=1 00:18:22.751 00:18:22.751 ' 00:18:22.751 14:24:28 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:22.751 14:24:28 -- nvmf/common.sh@7 -- # uname -s 00:18:22.751 14:24:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:22.751 14:24:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:22.751 14:24:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:22.751 14:24:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:22.751 14:24:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:22.751 14:24:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:22.751 14:24:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:22.751 14:24:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:22.751 14:24:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:22.751 14:24:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:22.751 14:24:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:18:22.751 
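Both version gates in this test, the lt 1.15 2 check on lcov traced above and the ge 3.1.1 3.0.0 check on openssl version | awk '{print $2}' a little further down, go through the same scripts/common.sh helper: split each version on '.', '-' and ':' and compare the fields numerically. A condensed, hypothetical equivalent of that comparison (ver_ge is a placeholder name and assumes purely numeric fields):

  # field-by-field ">=" version compare, equivalent in spirit to the cmp_versions trace
  ver_ge() {
      local IFS=.-: i
      read -ra a <<< "$1"; read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 1
      done
      return 0
  }
  ver_ge "$(openssl version | awk '{print $2}')" 3.0.0 && echo 'OpenSSL meets the 3.0.0 floor required for FIPS'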
14:24:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:18:22.751 14:24:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:22.751 14:24:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:22.751 14:24:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:22.751 14:24:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:22.751 14:24:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:22.751 14:24:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:22.751 14:24:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:22.751 14:24:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.751 14:24:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.751 14:24:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.751 14:24:28 -- paths/export.sh@5 -- # export PATH 00:18:22.751 14:24:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.751 14:24:28 -- nvmf/common.sh@46 -- # : 0 00:18:22.751 14:24:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:22.751 14:24:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:22.751 14:24:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:22.751 14:24:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:22.751 14:24:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:22.751 14:24:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
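common.sh generates the initiator identity once: nvme gen-hostnqn returns an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, the UUID suffix doubles as NVME_HOSTID, and both land in the NVME_HOST flag array. This particular test drives I/O through bdevperf rather than the kernel initiator, but these are the flags that $NVME_CONNECT ('nvme connect') would receive. A sketch of the pattern; the target address and subsystem NQN below are example values taken from later in this log:

    # Sketch: generate a host identity and hand it to nvme-cli (target values are examples).
    NVME_HOSTNQN=$(nvme gen-hostnqn)          # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}      # assumes the UUID suffix is reused as the host ID
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"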
00:18:22.751 14:24:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:22.751 14:24:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:22.751 14:24:28 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:22.751 14:24:28 -- fips/fips.sh@89 -- # check_openssl_version 00:18:22.751 14:24:28 -- fips/fips.sh@83 -- # local target=3.0.0 00:18:22.751 14:24:28 -- fips/fips.sh@85 -- # openssl version 00:18:22.751 14:24:28 -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:22.751 14:24:28 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:18:22.751 14:24:28 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:18:22.751 14:24:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:22.751 14:24:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:22.751 14:24:28 -- scripts/common.sh@335 -- # IFS=.-: 00:18:22.751 14:24:28 -- scripts/common.sh@335 -- # read -ra ver1 00:18:22.751 14:24:28 -- scripts/common.sh@336 -- # IFS=.-: 00:18:22.751 14:24:28 -- scripts/common.sh@336 -- # read -ra ver2 00:18:22.751 14:24:28 -- scripts/common.sh@337 -- # local 'op=>=' 00:18:22.751 14:24:28 -- scripts/common.sh@339 -- # ver1_l=3 00:18:22.751 14:24:28 -- scripts/common.sh@340 -- # ver2_l=3 00:18:22.751 14:24:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:22.751 14:24:28 -- scripts/common.sh@343 -- # case "$op" in 00:18:22.751 14:24:28 -- scripts/common.sh@347 -- # : 1 00:18:22.751 14:24:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:22.751 14:24:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:22.751 14:24:28 -- scripts/common.sh@364 -- # decimal 3 00:18:22.751 14:24:28 -- scripts/common.sh@352 -- # local d=3 00:18:22.751 14:24:28 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:22.751 14:24:28 -- scripts/common.sh@354 -- # echo 3 00:18:22.751 14:24:28 -- scripts/common.sh@364 -- # ver1[v]=3 00:18:22.751 14:24:28 -- scripts/common.sh@365 -- # decimal 3 00:18:22.751 14:24:28 -- scripts/common.sh@352 -- # local d=3 00:18:22.751 14:24:28 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:22.751 14:24:28 -- scripts/common.sh@354 -- # echo 3 00:18:22.751 14:24:28 -- scripts/common.sh@365 -- # ver2[v]=3 00:18:22.751 14:24:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:22.751 14:24:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:22.751 14:24:28 -- scripts/common.sh@363 -- # (( v++ )) 00:18:22.751 14:24:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:22.751 14:24:28 -- scripts/common.sh@364 -- # decimal 1 00:18:22.751 14:24:28 -- scripts/common.sh@352 -- # local d=1 00:18:22.751 14:24:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:22.751 14:24:28 -- scripts/common.sh@354 -- # echo 1 00:18:22.751 14:24:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:22.751 14:24:28 -- scripts/common.sh@365 -- # decimal 0 00:18:22.751 14:24:28 -- scripts/common.sh@352 -- # local d=0 00:18:22.751 14:24:28 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:22.751 14:24:28 -- scripts/common.sh@354 -- # echo 0 00:18:22.751 14:24:28 -- scripts/common.sh@365 -- # ver2[v]=0 00:18:22.751 14:24:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:22.752 14:24:28 -- scripts/common.sh@366 -- # return 0 00:18:22.752 14:24:28 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:22.752 14:24:28 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:18:22.752 14:24:28 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:22.752 14:24:28 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:22.752 14:24:28 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:22.752 14:24:28 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:22.752 14:24:28 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:22.752 14:24:28 -- fips/fips.sh@113 -- # build_openssl_config 00:18:22.752 14:24:28 -- fips/fips.sh@37 -- # cat 00:18:22.752 14:24:28 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:18:22.752 14:24:28 -- fips/fips.sh@58 -- # cat - 00:18:22.752 14:24:28 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:22.752 14:24:28 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:22.752 14:24:28 -- fips/fips.sh@116 -- # mapfile -t providers 00:18:22.752 14:24:28 -- fips/fips.sh@116 -- # grep name 00:18:22.752 14:24:28 -- fips/fips.sh@116 -- # openssl list -providers 00:18:22.752 14:24:28 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:22.752 14:24:28 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:22.752 14:24:28 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:22.752 14:24:28 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:22.752 14:24:28 -- common/autotest_common.sh@650 -- # local es=0 00:18:22.752 14:24:28 -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:22.752 14:24:28 -- fips/fips.sh@127 -- # : 00:18:22.752 14:24:28 -- common/autotest_common.sh@638 -- # local arg=openssl 00:18:22.752 14:24:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:22.752 14:24:28 -- common/autotest_common.sh@642 -- # type -t openssl 00:18:22.752 14:24:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:22.752 14:24:28 -- common/autotest_common.sh@644 -- # type -P openssl 00:18:22.752 14:24:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:22.752 14:24:28 -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:18:22.752 14:24:28 -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:18:22.752 14:24:28 -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:18:22.752 Error setting digest 00:18:22.752 4022BD1C867F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:18:22.752 4022BD1C867F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:18:22.752 14:24:28 -- common/autotest_common.sh@653 -- # es=1 00:18:22.752 14:24:28 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:22.752 14:24:28 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:22.752 14:24:28 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:22.752 14:24:28 -- fips/fips.sh@130 -- # nvmftestinit 00:18:22.752 14:24:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:22.752 14:24:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:22.752 14:24:28 -- nvmf/common.sh@436 -- # prepare_net_devs 
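The block above is the FIPS sanity check in fips.sh: it requires OpenSSL >= 3.0.0, confirms fips.so and both the base and fips providers are present, points OPENSSL_CONF at the generated spdk_fips.conf, and then deliberately runs a non-approved digest — the MD5 "Error setting digest" failure is the expected outcome, proving the restrictions are enforced before any TLS traffic is attempted. A rough manual equivalent, assuming an OpenSSL 3.x build with a FIPS provider already configured and active:

    # Rough manual equivalent of the check above (assumes OpenSSL 3.x with a FIPS provider active).
    openssl version                        # must report >= 3.0.0
    openssl list -providers | grep name    # expect both a base and a fips provider

    # Under FIPS enforcement a non-approved digest has to be rejected:
    if echo -n test | openssl md5 >/dev/null 2>&1; then
        echo "MD5 still works - FIPS restrictions are NOT active" >&2
    else
        echo "MD5 rejected - FIPS restrictions are active"
    fi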
00:18:22.752 14:24:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:22.752 14:24:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:22.752 14:24:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.752 14:24:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:22.752 14:24:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.752 14:24:28 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:22.752 14:24:28 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:22.752 14:24:28 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:22.752 14:24:28 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:22.752 14:24:28 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:22.752 14:24:28 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:22.752 14:24:28 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:22.752 14:24:28 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:22.752 14:24:28 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:22.752 14:24:28 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:22.752 14:24:28 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:22.752 14:24:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:22.752 14:24:28 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:22.752 14:24:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:22.752 14:24:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:22.752 14:24:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:22.752 14:24:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:22.752 14:24:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:22.752 14:24:28 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:22.752 14:24:28 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:23.011 Cannot find device "nvmf_tgt_br" 00:18:23.011 14:24:28 -- nvmf/common.sh@154 -- # true 00:18:23.011 14:24:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:23.011 Cannot find device "nvmf_tgt_br2" 00:18:23.011 14:24:28 -- nvmf/common.sh@155 -- # true 00:18:23.011 14:24:28 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:23.011 14:24:28 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:23.011 Cannot find device "nvmf_tgt_br" 00:18:23.011 14:24:28 -- nvmf/common.sh@157 -- # true 00:18:23.011 14:24:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:23.011 Cannot find device "nvmf_tgt_br2" 00:18:23.011 14:24:28 -- nvmf/common.sh@158 -- # true 00:18:23.011 14:24:28 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:23.011 14:24:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:23.011 14:24:28 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:23.011 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:23.011 14:24:28 -- nvmf/common.sh@161 -- # true 00:18:23.011 14:24:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:23.011 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:23.011 14:24:28 -- nvmf/common.sh@162 -- # true 00:18:23.011 14:24:28 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:23.011 14:24:28 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:23.011 14:24:28 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:23.011 14:24:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:23.011 14:24:28 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:23.011 14:24:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:23.011 14:24:28 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:23.011 14:24:28 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:23.011 14:24:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:23.011 14:24:28 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:23.011 14:24:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:23.011 14:24:28 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:23.011 14:24:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:23.011 14:24:28 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:23.011 14:24:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:23.011 14:24:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:23.011 14:24:28 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:23.011 14:24:28 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:23.011 14:24:28 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:23.270 14:24:28 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:23.271 14:24:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:23.271 14:24:28 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:23.271 14:24:28 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:23.271 14:24:28 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:23.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:23.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:18:23.271 00:18:23.271 --- 10.0.0.2 ping statistics --- 00:18:23.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.271 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:18:23.271 14:24:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:23.271 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:23.271 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:18:23.271 00:18:23.271 --- 10.0.0.3 ping statistics --- 00:18:23.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.271 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:18:23.271 14:24:28 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:23.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:23.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:18:23.271 00:18:23.271 --- 10.0.0.1 ping statistics --- 00:18:23.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.271 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:18:23.271 14:24:28 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:23.271 14:24:28 -- nvmf/common.sh@421 -- # return 0 00:18:23.271 14:24:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:23.271 14:24:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:23.271 14:24:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:23.271 14:24:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:23.271 14:24:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:23.271 14:24:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:23.271 14:24:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:23.271 14:24:28 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:23.271 14:24:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:23.271 14:24:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:23.271 14:24:28 -- common/autotest_common.sh@10 -- # set +x 00:18:23.271 14:24:28 -- nvmf/common.sh@469 -- # nvmfpid=90109 00:18:23.271 14:24:28 -- nvmf/common.sh@470 -- # waitforlisten 90109 00:18:23.271 14:24:28 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:23.271 14:24:28 -- common/autotest_common.sh@829 -- # '[' -z 90109 ']' 00:18:23.271 14:24:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.271 14:24:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:23.271 14:24:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:23.271 14:24:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:23.271 14:24:28 -- common/autotest_common.sh@10 -- # set +x 00:18:23.271 [2024-12-05 14:24:28.823257] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:23.271 [2024-12-05 14:24:28.823348] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:23.530 [2024-12-05 14:24:28.955854] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.530 [2024-12-05 14:24:29.041703] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:23.530 [2024-12-05 14:24:29.041847] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:23.530 [2024-12-05 14:24:29.041859] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:23.530 [2024-12-05 14:24:29.041867] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:23.530 [2024-12-05 14:24:29.041897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:24.468 14:24:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:24.468 14:24:29 -- common/autotest_common.sh@862 -- # return 0 00:18:24.468 14:24:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:24.468 14:24:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:24.468 14:24:29 -- common/autotest_common.sh@10 -- # set +x 00:18:24.468 14:24:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:24.468 14:24:29 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:24.468 14:24:29 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:24.468 14:24:29 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:24.468 14:24:29 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:24.468 14:24:29 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:24.468 14:24:29 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:24.468 14:24:29 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:24.468 14:24:29 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:24.727 [2024-12-05 14:24:30.142307] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:24.727 [2024-12-05 14:24:30.158242] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:24.727 [2024-12-05 14:24:30.158446] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:24.727 malloc0 00:18:24.727 14:24:30 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:24.727 14:24:30 -- fips/fips.sh@147 -- # bdevperf_pid=90167 00:18:24.727 14:24:30 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:24.727 14:24:30 -- fips/fips.sh@148 -- # waitforlisten 90167 /var/tmp/bdevperf.sock 00:18:24.727 14:24:30 -- common/autotest_common.sh@829 -- # '[' -z 90167 ']' 00:18:24.727 14:24:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:24.727 14:24:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:24.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:24.727 14:24:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:24.727 14:24:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:24.727 14:24:30 -- common/autotest_common.sh@10 -- # set +x 00:18:24.727 [2024-12-05 14:24:30.269413] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:18:24.727 [2024-12-05 14:24:30.269474] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90167 ] 00:18:24.985 [2024-12-05 14:24:30.406282] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.985 [2024-12-05 14:24:30.472793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:25.919 14:24:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:25.919 14:24:31 -- common/autotest_common.sh@862 -- # return 0 00:18:25.919 14:24:31 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:25.919 [2024-12-05 14:24:31.488015] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:25.919 TLSTESTn1 00:18:26.178 14:24:31 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:26.178 Running I/O for 10 seconds... 00:18:36.156 00:18:36.156 Latency(us) 00:18:36.156 [2024-12-05T14:24:41.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.156 [2024-12-05T14:24:41.804Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:36.156 Verification LBA range: start 0x0 length 0x2000 00:18:36.156 TLSTESTn1 : 10.01 5731.29 22.39 0.00 0.00 22308.05 3559.80 26333.56 00:18:36.156 [2024-12-05T14:24:41.804Z] =================================================================================================================== 00:18:36.156 [2024-12-05T14:24:41.804Z] Total : 5731.29 22.39 0.00 0.00 22308.05 3559.80 26333.56 00:18:36.156 0 00:18:36.156 14:24:41 -- fips/fips.sh@1 -- # cleanup 00:18:36.156 14:24:41 -- fips/fips.sh@15 -- # process_shm --id 0 00:18:36.156 14:24:41 -- common/autotest_common.sh@806 -- # type=--id 00:18:36.156 14:24:41 -- common/autotest_common.sh@807 -- # id=0 00:18:36.156 14:24:41 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:36.156 14:24:41 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:36.156 14:24:41 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:36.156 14:24:41 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:36.156 14:24:41 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:36.156 14:24:41 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:36.156 nvmf_trace.0 00:18:36.156 14:24:41 -- common/autotest_common.sh@821 -- # return 0 00:18:36.156 14:24:41 -- fips/fips.sh@16 -- # killprocess 90167 00:18:36.156 14:24:41 -- common/autotest_common.sh@936 -- # '[' -z 90167 ']' 00:18:36.156 14:24:41 -- common/autotest_common.sh@940 -- # kill -0 90167 00:18:36.156 14:24:41 -- common/autotest_common.sh@941 -- # uname 00:18:36.156 14:24:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:36.156 14:24:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90167 00:18:36.415 14:24:41 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:36.415 14:24:41 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:36.415 
killing process with pid 90167 00:18:36.415 14:24:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90167' 00:18:36.415 14:24:41 -- common/autotest_common.sh@955 -- # kill 90167 00:18:36.415 Received shutdown signal, test time was about 10.000000 seconds 00:18:36.415 00:18:36.415 Latency(us) 00:18:36.415 [2024-12-05T14:24:42.063Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.415 [2024-12-05T14:24:42.063Z] =================================================================================================================== 00:18:36.415 [2024-12-05T14:24:42.063Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:36.415 14:24:41 -- common/autotest_common.sh@960 -- # wait 90167 00:18:36.415 14:24:41 -- fips/fips.sh@17 -- # nvmftestfini 00:18:36.415 14:24:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:36.415 14:24:41 -- nvmf/common.sh@116 -- # sync 00:18:36.415 14:24:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:36.415 14:24:42 -- nvmf/common.sh@119 -- # set +e 00:18:36.415 14:24:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:36.415 14:24:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:36.674 rmmod nvme_tcp 00:18:36.674 rmmod nvme_fabrics 00:18:36.674 rmmod nvme_keyring 00:18:36.674 14:24:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:36.674 14:24:42 -- nvmf/common.sh@123 -- # set -e 00:18:36.674 14:24:42 -- nvmf/common.sh@124 -- # return 0 00:18:36.674 14:24:42 -- nvmf/common.sh@477 -- # '[' -n 90109 ']' 00:18:36.674 14:24:42 -- nvmf/common.sh@478 -- # killprocess 90109 00:18:36.674 14:24:42 -- common/autotest_common.sh@936 -- # '[' -z 90109 ']' 00:18:36.674 14:24:42 -- common/autotest_common.sh@940 -- # kill -0 90109 00:18:36.675 14:24:42 -- common/autotest_common.sh@941 -- # uname 00:18:36.675 14:24:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:36.675 14:24:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90109 00:18:36.675 14:24:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:36.675 killing process with pid 90109 00:18:36.675 14:24:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:36.675 14:24:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90109' 00:18:36.675 14:24:42 -- common/autotest_common.sh@955 -- # kill 90109 00:18:36.675 14:24:42 -- common/autotest_common.sh@960 -- # wait 90109 00:18:36.934 14:24:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:36.934 14:24:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:36.934 14:24:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:36.934 14:24:42 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:36.934 14:24:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:36.934 14:24:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:36.934 14:24:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:36.934 14:24:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:36.934 14:24:42 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:36.934 14:24:42 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:36.934 00:18:36.934 real 0m14.460s 00:18:36.934 user 0m18.190s 00:18:36.934 sys 0m6.683s 00:18:36.934 14:24:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:36.934 ************************************ 00:18:36.934 END TEST nvmf_fips 00:18:36.934 ************************************ 00:18:36.934 
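Stripped of the harness, the TLS exercise that just finished boils down to: write the pre-shared key to key.txt, configure the target (via rpc.py, behind setup_nvmf_tgt_conf) with a malloc namespace and a TLS-enabled TCP listener on 10.0.0.2:4420, then start bdevperf as a wait-for-RPC app, attach an NVMe-oF controller through the same PSK, and run verify I/O for ten seconds. The initiator-side steps, condensed from the trace with paths shortened (the key is the test key shown in this log, not a secret):

    # Condensed initiator-side sketch of the TLS run above (paths shortened).
    echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: > key.txt
    chmod 0600 key.txt

    # bdevperf waits for its configuration on a dedicated RPC socket
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    # (the real test waits for /var/tmp/bdevperf.sock to appear before continuing)

    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key.txt
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests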
14:24:42 -- common/autotest_common.sh@10 -- # set +x 00:18:36.934 14:24:42 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:18:36.934 14:24:42 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:36.934 14:24:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:36.934 14:24:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:36.934 14:24:42 -- common/autotest_common.sh@10 -- # set +x 00:18:36.934 ************************************ 00:18:36.934 START TEST nvmf_fuzz 00:18:36.934 ************************************ 00:18:36.934 14:24:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:36.934 * Looking for test storage... 00:18:37.193 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:37.193 14:24:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:37.193 14:24:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:37.193 14:24:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:37.193 14:24:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:37.193 14:24:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:37.193 14:24:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:37.193 14:24:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:37.193 14:24:42 -- scripts/common.sh@335 -- # IFS=.-: 00:18:37.193 14:24:42 -- scripts/common.sh@335 -- # read -ra ver1 00:18:37.193 14:24:42 -- scripts/common.sh@336 -- # IFS=.-: 00:18:37.193 14:24:42 -- scripts/common.sh@336 -- # read -ra ver2 00:18:37.193 14:24:42 -- scripts/common.sh@337 -- # local 'op=<' 00:18:37.193 14:24:42 -- scripts/common.sh@339 -- # ver1_l=2 00:18:37.193 14:24:42 -- scripts/common.sh@340 -- # ver2_l=1 00:18:37.193 14:24:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:37.193 14:24:42 -- scripts/common.sh@343 -- # case "$op" in 00:18:37.193 14:24:42 -- scripts/common.sh@344 -- # : 1 00:18:37.193 14:24:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:37.193 14:24:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:37.193 14:24:42 -- scripts/common.sh@364 -- # decimal 1 00:18:37.193 14:24:42 -- scripts/common.sh@352 -- # local d=1 00:18:37.193 14:24:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:37.193 14:24:42 -- scripts/common.sh@354 -- # echo 1 00:18:37.193 14:24:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:37.193 14:24:42 -- scripts/common.sh@365 -- # decimal 2 00:18:37.193 14:24:42 -- scripts/common.sh@352 -- # local d=2 00:18:37.193 14:24:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:37.193 14:24:42 -- scripts/common.sh@354 -- # echo 2 00:18:37.193 14:24:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:37.193 14:24:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:37.193 14:24:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:37.193 14:24:42 -- scripts/common.sh@367 -- # return 0 00:18:37.193 14:24:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:37.193 14:24:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:37.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.193 --rc genhtml_branch_coverage=1 00:18:37.193 --rc genhtml_function_coverage=1 00:18:37.193 --rc genhtml_legend=1 00:18:37.193 --rc geninfo_all_blocks=1 00:18:37.193 --rc geninfo_unexecuted_blocks=1 00:18:37.193 00:18:37.193 ' 00:18:37.193 14:24:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:37.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.193 --rc genhtml_branch_coverage=1 00:18:37.193 --rc genhtml_function_coverage=1 00:18:37.193 --rc genhtml_legend=1 00:18:37.193 --rc geninfo_all_blocks=1 00:18:37.193 --rc geninfo_unexecuted_blocks=1 00:18:37.193 00:18:37.193 ' 00:18:37.193 14:24:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:37.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.193 --rc genhtml_branch_coverage=1 00:18:37.193 --rc genhtml_function_coverage=1 00:18:37.193 --rc genhtml_legend=1 00:18:37.193 --rc geninfo_all_blocks=1 00:18:37.193 --rc geninfo_unexecuted_blocks=1 00:18:37.193 00:18:37.193 ' 00:18:37.193 14:24:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:37.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.193 --rc genhtml_branch_coverage=1 00:18:37.193 --rc genhtml_function_coverage=1 00:18:37.193 --rc genhtml_legend=1 00:18:37.193 --rc geninfo_all_blocks=1 00:18:37.193 --rc geninfo_unexecuted_blocks=1 00:18:37.193 00:18:37.193 ' 00:18:37.193 14:24:42 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:37.193 14:24:42 -- nvmf/common.sh@7 -- # uname -s 00:18:37.193 14:24:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:37.193 14:24:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:37.193 14:24:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:37.193 14:24:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:37.193 14:24:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:37.193 14:24:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:37.193 14:24:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:37.193 14:24:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:37.193 14:24:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:37.193 14:24:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:37.193 14:24:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 
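Each sub-test in this log is launched through run_test, which is why every section opens and closes with the asterisk banners and finishes with a real/user/sys timing line. Purely as an illustration of that visible pattern (this is not SPDK's actual run_test implementation):

    # Illustration only: a wrapper producing the banner/timing framing seen in this log.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    run_test nvmf_fuzz test/nvmf/target/fabrics_fuzz.sh --transport=tcp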
00:18:37.193 14:24:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:18:37.193 14:24:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:37.193 14:24:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:37.193 14:24:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:37.193 14:24:42 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:37.193 14:24:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:37.194 14:24:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:37.194 14:24:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:37.194 14:24:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.194 14:24:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.194 14:24:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.194 14:24:42 -- paths/export.sh@5 -- # export PATH 00:18:37.194 14:24:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.194 14:24:42 -- nvmf/common.sh@46 -- # : 0 00:18:37.194 14:24:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:37.194 14:24:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:37.194 14:24:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:37.194 14:24:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:37.194 14:24:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:37.194 14:24:42 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:18:37.194 14:24:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:37.194 14:24:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:37.194 14:24:42 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:18:37.194 14:24:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:37.194 14:24:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:37.194 14:24:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:37.194 14:24:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:37.194 14:24:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:37.194 14:24:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.194 14:24:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:37.194 14:24:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.194 14:24:42 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:37.194 14:24:42 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:37.194 14:24:42 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:37.194 14:24:42 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:37.194 14:24:42 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:37.194 14:24:42 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:37.194 14:24:42 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:37.194 14:24:42 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:37.194 14:24:42 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:37.194 14:24:42 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:37.194 14:24:42 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:37.194 14:24:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:37.194 14:24:42 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:37.194 14:24:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:37.194 14:24:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:37.194 14:24:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:37.194 14:24:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:37.194 14:24:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:37.194 14:24:42 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:37.194 14:24:42 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:37.194 Cannot find device "nvmf_tgt_br" 00:18:37.194 14:24:42 -- nvmf/common.sh@154 -- # true 00:18:37.194 14:24:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:37.194 Cannot find device "nvmf_tgt_br2" 00:18:37.194 14:24:42 -- nvmf/common.sh@155 -- # true 00:18:37.194 14:24:42 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:37.194 14:24:42 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:37.194 Cannot find device "nvmf_tgt_br" 00:18:37.194 14:24:42 -- nvmf/common.sh@157 -- # true 00:18:37.194 14:24:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:37.194 Cannot find device "nvmf_tgt_br2" 00:18:37.194 14:24:42 -- nvmf/common.sh@158 -- # true 00:18:37.194 14:24:42 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:37.194 14:24:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:37.194 14:24:42 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:37.194 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:37.194 14:24:42 -- nvmf/common.sh@161 -- # true 00:18:37.194 14:24:42 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:37.194 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:37.194 14:24:42 -- nvmf/common.sh@162 -- # true 00:18:37.194 14:24:42 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:37.455 14:24:42 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:37.455 14:24:42 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:37.455 14:24:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:37.455 14:24:42 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:37.455 14:24:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:37.455 14:24:42 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:37.455 14:24:42 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:37.455 14:24:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:37.455 14:24:42 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:37.455 14:24:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:37.455 14:24:42 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:37.455 14:24:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:37.455 14:24:42 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:37.455 14:24:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:37.455 14:24:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:37.455 14:24:42 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:37.455 14:24:42 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:37.455 14:24:42 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:37.455 14:24:42 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:37.455 14:24:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:37.455 14:24:43 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:37.455 14:24:43 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:37.455 14:24:43 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:37.455 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:37.455 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:18:37.455 00:18:37.455 --- 10.0.0.2 ping statistics --- 00:18:37.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.455 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:18:37.455 14:24:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:37.455 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:37.455 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:18:37.455 00:18:37.455 --- 10.0.0.3 ping statistics --- 00:18:37.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.455 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:18:37.455 14:24:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:37.455 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:37.455 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:18:37.455 00:18:37.455 --- 10.0.0.1 ping statistics --- 00:18:37.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.455 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:18:37.455 14:24:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:37.455 14:24:43 -- nvmf/common.sh@421 -- # return 0 00:18:37.455 14:24:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:37.455 14:24:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:37.455 14:24:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:37.455 14:24:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:37.455 14:24:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:37.455 14:24:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:37.455 14:24:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:37.455 14:24:43 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=90521 00:18:37.455 14:24:43 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:37.455 14:24:43 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:37.455 14:24:43 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 90521 00:18:37.455 14:24:43 -- common/autotest_common.sh@829 -- # '[' -z 90521 ']' 00:18:37.455 14:24:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.455 14:24:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:37.455 14:24:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
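nvmf_veth_init, traced just above (and earlier for the fips test), gives the target its own network namespace: nvmf_tgt_if/nvmf_tgt_if2 live inside nvmf_tgt_ns_spdk with 10.0.0.2 and 10.0.0.3, the initiator keeps 10.0.0.1 on nvmf_init_if, the peer ends are enslaved to the nvmf_br bridge, iptables is opened for port 4420, and pings confirm reachability before nvmf_tgt is started inside the namespace. Reduced to one target interface and with error handling omitted, the topology is roughly:

    # Reduced sketch of the veth/bridge topology built by nvmf_veth_init (one target interface).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target side

    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target reachability, as in the trace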
00:18:37.455 14:24:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:37.455 14:24:43 -- common/autotest_common.sh@10 -- # set +x 00:18:38.861 14:24:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:38.861 14:24:44 -- common/autotest_common.sh@862 -- # return 0 00:18:38.861 14:24:44 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:38.861 14:24:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.861 14:24:44 -- common/autotest_common.sh@10 -- # set +x 00:18:38.861 14:24:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.861 14:24:44 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:18:38.861 14:24:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.861 14:24:44 -- common/autotest_common.sh@10 -- # set +x 00:18:38.861 Malloc0 00:18:38.861 14:24:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.861 14:24:44 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:38.861 14:24:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.861 14:24:44 -- common/autotest_common.sh@10 -- # set +x 00:18:38.861 14:24:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.861 14:24:44 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:38.861 14:24:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.861 14:24:44 -- common/autotest_common.sh@10 -- # set +x 00:18:38.861 14:24:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.861 14:24:44 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:38.861 14:24:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.861 14:24:44 -- common/autotest_common.sh@10 -- # set +x 00:18:38.861 14:24:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.861 14:24:44 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:18:38.861 14:24:44 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:18:39.120 Shutting down the fuzz application 00:18:39.120 14:24:44 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:18:39.379 Shutting down the fuzz application 00:18:39.379 14:24:44 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:39.379 14:24:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.379 14:24:44 -- common/autotest_common.sh@10 -- # set +x 00:18:39.379 14:24:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.379 14:24:44 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:18:39.379 14:24:44 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:18:39.379 14:24:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:39.379 14:24:44 -- nvmf/common.sh@116 -- # sync 00:18:39.379 14:24:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:39.379 14:24:44 -- nvmf/common.sh@119 -- # set +e 00:18:39.379 14:24:44 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:18:39.379 14:24:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:39.379 rmmod nvme_tcp 00:18:39.379 rmmod nvme_fabrics 00:18:39.379 rmmod nvme_keyring 00:18:39.379 14:24:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:39.379 14:24:44 -- nvmf/common.sh@123 -- # set -e 00:18:39.379 14:24:44 -- nvmf/common.sh@124 -- # return 0 00:18:39.379 14:24:44 -- nvmf/common.sh@477 -- # '[' -n 90521 ']' 00:18:39.379 14:24:44 -- nvmf/common.sh@478 -- # killprocess 90521 00:18:39.379 14:24:44 -- common/autotest_common.sh@936 -- # '[' -z 90521 ']' 00:18:39.379 14:24:44 -- common/autotest_common.sh@940 -- # kill -0 90521 00:18:39.379 14:24:44 -- common/autotest_common.sh@941 -- # uname 00:18:39.379 14:24:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:39.379 14:24:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90521 00:18:39.379 14:24:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:39.379 14:24:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:39.379 killing process with pid 90521 00:18:39.379 14:24:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90521' 00:18:39.379 14:24:45 -- common/autotest_common.sh@955 -- # kill 90521 00:18:39.379 14:24:45 -- common/autotest_common.sh@960 -- # wait 90521 00:18:39.636 14:24:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:39.636 14:24:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:39.636 14:24:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:39.636 14:24:45 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:39.636 14:24:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:39.636 14:24:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:39.636 14:24:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:39.636 14:24:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:39.636 14:24:45 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:39.636 14:24:45 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:18:39.894 00:18:39.894 real 0m2.776s 00:18:39.894 user 0m2.823s 00:18:39.894 sys 0m0.753s 00:18:39.894 14:24:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:39.894 14:24:45 -- common/autotest_common.sh@10 -- # set +x 00:18:39.894 ************************************ 00:18:39.894 END TEST nvmf_fuzz 00:18:39.894 ************************************ 00:18:39.894 14:24:45 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:39.894 14:24:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:39.894 14:24:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:39.894 14:24:45 -- common/autotest_common.sh@10 -- # set +x 00:18:39.894 ************************************ 00:18:39.894 START TEST nvmf_multiconnection 00:18:39.894 ************************************ 00:18:39.894 14:24:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:39.894 * Looking for test storage... 
00:18:39.894 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:39.894 14:24:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:39.894 14:24:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:39.894 14:24:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:40.154 14:24:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:40.154 14:24:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:40.154 14:24:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:40.154 14:24:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:40.154 14:24:45 -- scripts/common.sh@335 -- # IFS=.-: 00:18:40.154 14:24:45 -- scripts/common.sh@335 -- # read -ra ver1 00:18:40.154 14:24:45 -- scripts/common.sh@336 -- # IFS=.-: 00:18:40.154 14:24:45 -- scripts/common.sh@336 -- # read -ra ver2 00:18:40.154 14:24:45 -- scripts/common.sh@337 -- # local 'op=<' 00:18:40.154 14:24:45 -- scripts/common.sh@339 -- # ver1_l=2 00:18:40.154 14:24:45 -- scripts/common.sh@340 -- # ver2_l=1 00:18:40.154 14:24:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:40.154 14:24:45 -- scripts/common.sh@343 -- # case "$op" in 00:18:40.154 14:24:45 -- scripts/common.sh@344 -- # : 1 00:18:40.154 14:24:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:40.154 14:24:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:40.154 14:24:45 -- scripts/common.sh@364 -- # decimal 1 00:18:40.154 14:24:45 -- scripts/common.sh@352 -- # local d=1 00:18:40.154 14:24:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:40.154 14:24:45 -- scripts/common.sh@354 -- # echo 1 00:18:40.154 14:24:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:40.154 14:24:45 -- scripts/common.sh@365 -- # decimal 2 00:18:40.154 14:24:45 -- scripts/common.sh@352 -- # local d=2 00:18:40.154 14:24:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:40.154 14:24:45 -- scripts/common.sh@354 -- # echo 2 00:18:40.154 14:24:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:40.154 14:24:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:40.154 14:24:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:40.154 14:24:45 -- scripts/common.sh@367 -- # return 0 00:18:40.154 14:24:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:40.154 14:24:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:40.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.154 --rc genhtml_branch_coverage=1 00:18:40.154 --rc genhtml_function_coverage=1 00:18:40.154 --rc genhtml_legend=1 00:18:40.154 --rc geninfo_all_blocks=1 00:18:40.154 --rc geninfo_unexecuted_blocks=1 00:18:40.154 00:18:40.154 ' 00:18:40.154 14:24:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:40.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.154 --rc genhtml_branch_coverage=1 00:18:40.154 --rc genhtml_function_coverage=1 00:18:40.154 --rc genhtml_legend=1 00:18:40.154 --rc geninfo_all_blocks=1 00:18:40.154 --rc geninfo_unexecuted_blocks=1 00:18:40.154 00:18:40.154 ' 00:18:40.154 14:24:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:40.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.154 --rc genhtml_branch_coverage=1 00:18:40.154 --rc genhtml_function_coverage=1 00:18:40.154 --rc genhtml_legend=1 00:18:40.154 --rc geninfo_all_blocks=1 00:18:40.154 --rc geninfo_unexecuted_blocks=1 00:18:40.154 00:18:40.154 ' 00:18:40.154 
14:24:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:40.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.154 --rc genhtml_branch_coverage=1 00:18:40.154 --rc genhtml_function_coverage=1 00:18:40.154 --rc genhtml_legend=1 00:18:40.154 --rc geninfo_all_blocks=1 00:18:40.154 --rc geninfo_unexecuted_blocks=1 00:18:40.154 00:18:40.154 ' 00:18:40.154 14:24:45 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:40.154 14:24:45 -- nvmf/common.sh@7 -- # uname -s 00:18:40.154 14:24:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:40.154 14:24:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:40.154 14:24:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:40.154 14:24:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:40.154 14:24:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:40.154 14:24:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:40.154 14:24:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:40.154 14:24:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:40.154 14:24:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:40.154 14:24:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:40.154 14:24:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:18:40.154 14:24:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:18:40.154 14:24:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:40.154 14:24:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:40.154 14:24:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:40.154 14:24:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:40.154 14:24:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:40.154 14:24:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:40.154 14:24:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:40.154 14:24:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.154 14:24:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.154 14:24:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.154 14:24:45 -- paths/export.sh@5 -- # export PATH 00:18:40.155 14:24:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.155 14:24:45 -- nvmf/common.sh@46 -- # : 0 00:18:40.155 14:24:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:40.155 14:24:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:40.155 14:24:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:40.155 14:24:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:40.155 14:24:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:40.155 14:24:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:40.155 14:24:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:40.155 14:24:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:40.155 14:24:45 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:40.155 14:24:45 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:40.155 14:24:45 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:18:40.155 14:24:45 -- target/multiconnection.sh@16 -- # nvmftestinit 00:18:40.155 14:24:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:40.155 14:24:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:40.155 14:24:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:40.155 14:24:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:40.155 14:24:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:40.155 14:24:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.155 14:24:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:40.155 14:24:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.155 14:24:45 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:40.155 14:24:45 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:40.155 14:24:45 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:40.155 14:24:45 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:40.155 14:24:45 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:40.155 14:24:45 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:40.155 14:24:45 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:40.155 14:24:45 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:40.155 14:24:45 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:40.155 14:24:45 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:40.155 14:24:45 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:40.155 14:24:45 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:40.155 14:24:45 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:40.155 14:24:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:40.155 14:24:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:40.155 14:24:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:40.155 14:24:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:40.155 14:24:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:40.155 14:24:45 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:40.155 14:24:45 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:40.155 Cannot find device "nvmf_tgt_br" 00:18:40.155 14:24:45 -- nvmf/common.sh@154 -- # true 00:18:40.155 14:24:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:40.155 Cannot find device "nvmf_tgt_br2" 00:18:40.155 14:24:45 -- nvmf/common.sh@155 -- # true 00:18:40.155 14:24:45 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:40.155 14:24:45 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:40.155 Cannot find device "nvmf_tgt_br" 00:18:40.155 14:24:45 -- nvmf/common.sh@157 -- # true 00:18:40.155 14:24:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:40.155 Cannot find device "nvmf_tgt_br2" 00:18:40.155 14:24:45 -- nvmf/common.sh@158 -- # true 00:18:40.155 14:24:45 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:40.155 14:24:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:40.155 14:24:45 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:40.155 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:40.155 14:24:45 -- nvmf/common.sh@161 -- # true 00:18:40.155 14:24:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:40.155 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:40.155 14:24:45 -- nvmf/common.sh@162 -- # true 00:18:40.155 14:24:45 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:40.155 14:24:45 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:40.155 14:24:45 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:40.155 14:24:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:40.155 14:24:45 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:40.428 14:24:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:40.428 14:24:45 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:40.428 14:24:45 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:40.428 14:24:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:40.428 14:24:45 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:40.428 14:24:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:40.428 14:24:45 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:40.428 14:24:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:40.428 14:24:45 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:40.428 14:24:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:18:40.428 14:24:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:40.428 14:24:45 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:40.428 14:24:45 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:40.428 14:24:45 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:40.428 14:24:45 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:40.428 14:24:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:40.428 14:24:45 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:40.428 14:24:45 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:40.428 14:24:45 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:40.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:40.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:18:40.428 00:18:40.428 --- 10.0.0.2 ping statistics --- 00:18:40.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.428 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:18:40.428 14:24:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:40.428 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:40.428 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:18:40.428 00:18:40.428 --- 10.0.0.3 ping statistics --- 00:18:40.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.428 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:18:40.428 14:24:45 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:40.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:40.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:18:40.428 00:18:40.428 --- 10.0.0.1 ping statistics --- 00:18:40.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.428 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:18:40.428 14:24:45 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:40.428 14:24:45 -- nvmf/common.sh@421 -- # return 0 00:18:40.428 14:24:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:40.428 14:24:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:40.428 14:24:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:40.428 14:24:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:40.428 14:24:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:40.428 14:24:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:40.428 14:24:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:40.428 14:24:45 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:18:40.428 14:24:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:40.428 14:24:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:40.428 14:24:45 -- common/autotest_common.sh@10 -- # set +x 00:18:40.428 14:24:46 -- nvmf/common.sh@469 -- # nvmfpid=90734 00:18:40.428 14:24:46 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:40.428 14:24:46 -- nvmf/common.sh@470 -- # waitforlisten 90734 00:18:40.428 14:24:46 -- common/autotest_common.sh@829 -- # '[' -z 90734 ']' 00:18:40.428 14:24:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.428 14:24:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:40.428 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:18:40.428 14:24:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.428 14:24:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:40.428 14:24:46 -- common/autotest_common.sh@10 -- # set +x 00:18:40.428 [2024-12-05 14:24:46.059576] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:40.428 [2024-12-05 14:24:46.059678] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:40.687 [2024-12-05 14:24:46.195403] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:40.687 [2024-12-05 14:24:46.264671] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:40.687 [2024-12-05 14:24:46.264880] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:40.687 [2024-12-05 14:24:46.264896] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:40.687 [2024-12-05 14:24:46.264906] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:40.687 [2024-12-05 14:24:46.265085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:40.687 [2024-12-05 14:24:46.265240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:40.687 [2024-12-05 14:24:46.265909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:40.687 [2024-12-05 14:24:46.265934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.622 14:24:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:41.622 14:24:47 -- common/autotest_common.sh@862 -- # return 0 00:18:41.622 14:24:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:41.622 14:24:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:41.622 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:41.622 14:24:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:41.622 14:24:47 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:41.622 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.622 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:41.622 [2024-12-05 14:24:47.089583] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:41.622 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.622 14:24:47 -- target/multiconnection.sh@21 -- # seq 1 11 00:18:41.622 14:24:47 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:41.622 14:24:47 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:41.622 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.622 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:41.622 Malloc1 00:18:41.622 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.622 14:24:47 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:18:41.622 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.622 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:41.622 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.622 
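Everything from nvmftestinit down to the nvmf_create_transport call above is the virtual test topology being built: a network namespace for the target, veth pairs bridged together, 10.0.0.x/24 addresses on both ends, an iptables accept rule for the NVMe/TCP port, ping checks in both directions, and finally nvmf_tgt started inside the namespace with its RPC socket on /var/tmp/spdk.sock. Stripped of the trace noise (and leaving out the second target interface nvmf_tgt_if2 / 10.0.0.3, which is set up the same way), the sequence is roughly:

  # condensed sketch of nvmf_veth_init plus target start, as traced above
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # target side
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge && ip link set nvmf_br up               # tie both sides together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                                      # initiator -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                       # target -> initiator

  modprobe nvme-tcp                                                       # kernel initiator driver
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # once the app listens on /var/tmp/spdk.sock, rpc_cmd (a wrapper around scripts/rpc.py) configures it:
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192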
14:24:47 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:41.622 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.622 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:41.622 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.622 14:24:47 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:41.622 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.622 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:41.622 [2024-12-05 14:24:47.160217] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:41.622 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.622 14:24:47 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:41.622 14:24:47 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:18:41.622 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.622 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:41.622 Malloc2 00:18:41.622 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.622 14:24:47 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:41.622 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.622 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:41.622 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.622 14:24:47 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:18:41.622 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.622 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:41.622 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.622 14:24:47 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:41.622 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.622 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:41.622 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.622 14:24:47 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:41.622 14:24:47 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:18:41.622 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.622 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:41.622 Malloc3 00:18:41.622 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.622 14:24:47 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:18:41.622 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.622 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:41.622 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.622 14:24:47 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:18:41.622 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.622 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:41.622 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.622 14:24:47 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 
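This same four-step RPC recipe repeats for each of the NVMF_SUBSYS=11 subsystems; cnode1 through cnode3 appear above and cnode4 through cnode11 follow below. Each iteration creates a 64 MiB malloc bdev with 512-byte blocks, creates a subsystem with serial SPDKi, attaches the bdev as a namespace, and adds a TCP listener on 10.0.0.2:4420. The loop driving it in target/multiconnection.sh boils down to:

  # provisioning loop traced above (multiconnection.sh lines 21-25); rpc_cmd wraps scripts/rpc.py
  for i in $(seq 1 "$NVMF_SUBSYS"); do        # NVMF_SUBSYS=11 in this test
      rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"
      rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
      rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
      rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done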
00:18:41.622 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.622 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:41.880 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.880 14:24:47 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:41.880 14:24:47 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:18:41.880 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.880 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:41.880 Malloc4 00:18:41.880 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.880 14:24:47 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:18:41.880 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.880 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:41.880 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.880 14:24:47 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:18:41.880 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.880 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:41.880 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.880 14:24:47 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:18:41.880 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.880 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:41.880 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.880 14:24:47 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:41.880 14:24:47 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:18:41.880 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.880 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:41.880 Malloc5 00:18:41.880 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.880 14:24:47 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:18:41.880 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.880 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:41.880 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.880 14:24:47 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:18:41.880 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.880 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:41.880 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.880 14:24:47 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:18:41.880 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.880 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:41.880 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.880 14:24:47 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:41.880 14:24:47 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:18:41.880 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.880 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:41.880 Malloc6 00:18:41.880 14:24:47 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.880 14:24:47 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:18:41.880 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.880 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:41.880 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.880 14:24:47 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:18:41.880 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.880 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:41.880 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.880 14:24:47 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:18:41.880 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.880 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:41.880 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.880 14:24:47 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:41.880 14:24:47 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:18:41.880 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.880 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:41.880 Malloc7 00:18:41.880 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.880 14:24:47 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:18:41.880 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.880 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:41.880 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.880 14:24:47 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:18:41.880 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.880 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:41.880 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.880 14:24:47 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:18:41.880 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.880 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:41.880 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.880 14:24:47 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:41.880 14:24:47 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:18:41.880 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.880 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:41.880 Malloc8 00:18:41.880 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.881 14:24:47 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:18:41.881 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.881 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:41.881 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.881 14:24:47 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:18:41.881 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.881 14:24:47 
-- common/autotest_common.sh@10 -- # set +x 00:18:41.881 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.881 14:24:47 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:18:41.881 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.881 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:42.139 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.139 14:24:47 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:42.139 14:24:47 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:18:42.139 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.139 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:42.139 Malloc9 00:18:42.139 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.139 14:24:47 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:18:42.139 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.139 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:42.139 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.139 14:24:47 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:18:42.139 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.139 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:42.139 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.139 14:24:47 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:18:42.139 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.139 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:42.139 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.139 14:24:47 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:42.139 14:24:47 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:18:42.139 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.139 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:42.139 Malloc10 00:18:42.139 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.139 14:24:47 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:18:42.139 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.139 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:42.139 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.139 14:24:47 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:18:42.139 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.139 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:42.139 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.139 14:24:47 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:18:42.139 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.139 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:42.139 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.139 14:24:47 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:42.139 14:24:47 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:18:42.139 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.139 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:42.139 Malloc11 00:18:42.139 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.139 14:24:47 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:18:42.139 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.139 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:42.139 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.139 14:24:47 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:18:42.139 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.139 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:42.139 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.139 14:24:47 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:18:42.139 14:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.139 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:18:42.139 14:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.139 14:24:47 -- target/multiconnection.sh@28 -- # seq 1 11 00:18:42.139 14:24:47 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:42.139 14:24:47 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:42.397 14:24:47 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:18:42.397 14:24:47 -- common/autotest_common.sh@1187 -- # local i=0 00:18:42.397 14:24:47 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:42.397 14:24:47 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:42.397 14:24:47 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:44.296 14:24:49 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:44.296 14:24:49 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:44.296 14:24:49 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:18:44.296 14:24:49 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:44.296 14:24:49 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:44.296 14:24:49 -- common/autotest_common.sh@1197 -- # return 0 00:18:44.296 14:24:49 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:44.297 14:24:49 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:18:44.555 14:24:50 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:18:44.555 14:24:50 -- common/autotest_common.sh@1187 -- # local i=0 00:18:44.555 14:24:50 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:44.555 14:24:50 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:44.555 14:24:50 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:46.456 14:24:52 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:46.456 14:24:52 -- common/autotest_common.sh@1196 -- # lsblk -l -o 
NAME,SERIAL 00:18:46.456 14:24:52 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:18:46.456 14:24:52 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:46.456 14:24:52 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:46.456 14:24:52 -- common/autotest_common.sh@1197 -- # return 0 00:18:46.456 14:24:52 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:46.456 14:24:52 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:18:46.714 14:24:52 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:18:46.714 14:24:52 -- common/autotest_common.sh@1187 -- # local i=0 00:18:46.714 14:24:52 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:46.714 14:24:52 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:46.714 14:24:52 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:49.243 14:24:54 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:49.243 14:24:54 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:49.243 14:24:54 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:18:49.243 14:24:54 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:49.243 14:24:54 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:49.243 14:24:54 -- common/autotest_common.sh@1197 -- # return 0 00:18:49.243 14:24:54 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:49.243 14:24:54 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:18:49.243 14:24:54 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:18:49.243 14:24:54 -- common/autotest_common.sh@1187 -- # local i=0 00:18:49.243 14:24:54 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:49.243 14:24:54 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:49.243 14:24:54 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:51.138 14:24:56 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:51.138 14:24:56 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:51.138 14:24:56 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:18:51.138 14:24:56 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:51.138 14:24:56 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:51.138 14:24:56 -- common/autotest_common.sh@1197 -- # return 0 00:18:51.138 14:24:56 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:51.138 14:24:56 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:18:51.138 14:24:56 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:18:51.138 14:24:56 -- common/autotest_common.sh@1187 -- # local i=0 00:18:51.138 14:24:56 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:51.138 14:24:56 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:51.138 14:24:56 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:53.040 14:24:58 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:53.040 14:24:58 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:53.040 14:24:58 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:18:53.299 14:24:58 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:53.299 14:24:58 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:53.299 14:24:58 -- common/autotest_common.sh@1197 -- # return 0 00:18:53.299 14:24:58 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:53.299 14:24:58 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:18:53.299 14:24:58 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:18:53.299 14:24:58 -- common/autotest_common.sh@1187 -- # local i=0 00:18:53.299 14:24:58 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:53.299 14:24:58 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:53.299 14:24:58 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:55.828 14:25:00 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:55.828 14:25:00 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:55.828 14:25:00 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:18:55.828 14:25:00 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:55.828 14:25:00 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:55.828 14:25:00 -- common/autotest_common.sh@1197 -- # return 0 00:18:55.828 14:25:00 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:55.828 14:25:00 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:18:55.828 14:25:01 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:18:55.828 14:25:01 -- common/autotest_common.sh@1187 -- # local i=0 00:18:55.828 14:25:01 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:55.829 14:25:01 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:55.829 14:25:01 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:57.728 14:25:03 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:57.728 14:25:03 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:57.728 14:25:03 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:18:57.728 14:25:03 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:57.728 14:25:03 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:57.728 14:25:03 -- common/autotest_common.sh@1197 -- # return 0 00:18:57.728 14:25:03 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:57.728 14:25:03 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:18:57.728 14:25:03 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:18:57.728 14:25:03 -- common/autotest_common.sh@1187 -- # local i=0 00:18:57.728 14:25:03 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:57.728 14:25:03 -- 
common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:57.728 14:25:03 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:00.266 14:25:05 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:00.266 14:25:05 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:00.266 14:25:05 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:19:00.266 14:25:05 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:00.266 14:25:05 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:00.266 14:25:05 -- common/autotest_common.sh@1197 -- # return 0 00:19:00.266 14:25:05 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:00.266 14:25:05 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:19:00.266 14:25:05 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:19:00.266 14:25:05 -- common/autotest_common.sh@1187 -- # local i=0 00:19:00.266 14:25:05 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:00.266 14:25:05 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:00.266 14:25:05 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:02.168 14:25:07 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:02.169 14:25:07 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:02.169 14:25:07 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:19:02.169 14:25:07 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:02.169 14:25:07 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:02.169 14:25:07 -- common/autotest_common.sh@1197 -- # return 0 00:19:02.169 14:25:07 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:02.169 14:25:07 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:19:02.169 14:25:07 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:19:02.169 14:25:07 -- common/autotest_common.sh@1187 -- # local i=0 00:19:02.169 14:25:07 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:02.169 14:25:07 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:02.169 14:25:07 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:04.168 14:25:09 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:04.168 14:25:09 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:04.168 14:25:09 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:19:04.168 14:25:09 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:04.168 14:25:09 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:04.168 14:25:09 -- common/autotest_common.sh@1197 -- # return 0 00:19:04.168 14:25:09 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:04.168 14:25:09 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:19:04.425 14:25:09 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:19:04.425 14:25:09 -- common/autotest_common.sh@1187 -- # local i=0 
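On the host side the script then connects to each subsystem with nvme-cli, using the hostnqn/hostid generated in common.sh, and waitforserial polls lsblk until a block device reporting the matching serial shows up (up to roughly 15 tries, two seconds apart). The connect/wait pattern repeated above for SPDK1 through SPDK11 is essentially:

  # host-side connect + wait, as repeated for cnode1..cnode11 above
  for i in $(seq 1 "$NVMF_SUBSYS"); do
      nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
          -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
      for ((try = 0; try <= 15; try++)); do   # waitforserial SPDK$i
          (( $(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i") >= 1 )) && break
          sleep 2
      done
  done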
00:19:04.425 14:25:09 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:04.425 14:25:09 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:04.425 14:25:09 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:06.326 14:25:11 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:06.326 14:25:11 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:06.326 14:25:11 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:19:06.326 14:25:11 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:06.326 14:25:11 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:06.326 14:25:11 -- common/autotest_common.sh@1197 -- # return 0 00:19:06.326 14:25:11 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:19:06.326 [global] 00:19:06.326 thread=1 00:19:06.326 invalidate=1 00:19:06.326 rw=read 00:19:06.326 time_based=1 00:19:06.326 runtime=10 00:19:06.326 ioengine=libaio 00:19:06.326 direct=1 00:19:06.326 bs=262144 00:19:06.326 iodepth=64 00:19:06.326 norandommap=1 00:19:06.326 numjobs=1 00:19:06.326 00:19:06.326 [job0] 00:19:06.326 filename=/dev/nvme0n1 00:19:06.585 [job1] 00:19:06.585 filename=/dev/nvme10n1 00:19:06.585 [job2] 00:19:06.585 filename=/dev/nvme1n1 00:19:06.585 [job3] 00:19:06.585 filename=/dev/nvme2n1 00:19:06.585 [job4] 00:19:06.585 filename=/dev/nvme3n1 00:19:06.585 [job5] 00:19:06.585 filename=/dev/nvme4n1 00:19:06.585 [job6] 00:19:06.585 filename=/dev/nvme5n1 00:19:06.585 [job7] 00:19:06.585 filename=/dev/nvme6n1 00:19:06.585 [job8] 00:19:06.585 filename=/dev/nvme7n1 00:19:06.585 [job9] 00:19:06.585 filename=/dev/nvme8n1 00:19:06.585 [job10] 00:19:06.585 filename=/dev/nvme9n1 00:19:06.585 Could not set queue depth (nvme0n1) 00:19:06.585 Could not set queue depth (nvme10n1) 00:19:06.585 Could not set queue depth (nvme1n1) 00:19:06.585 Could not set queue depth (nvme2n1) 00:19:06.585 Could not set queue depth (nvme3n1) 00:19:06.585 Could not set queue depth (nvme4n1) 00:19:06.585 Could not set queue depth (nvme5n1) 00:19:06.585 Could not set queue depth (nvme6n1) 00:19:06.585 Could not set queue depth (nvme7n1) 00:19:06.585 Could not set queue depth (nvme8n1) 00:19:06.585 Could not set queue depth (nvme9n1) 00:19:06.845 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:06.845 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:06.845 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:06.845 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:06.845 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:06.845 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:06.845 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:06.845 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:06.845 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:06.845 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
iodepth=64 00:19:06.845 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:06.845 fio-3.35 00:19:06.845 Starting 11 threads 00:19:19.126 00:19:19.127 job0: (groupid=0, jobs=1): err= 0: pid=91217: Thu Dec 5 14:25:22 2024 00:19:19.127 read: IOPS=645, BW=161MiB/s (169MB/s)(1639MiB/10146msec) 00:19:19.127 slat (usec): min=16, max=65469, avg=1472.75, stdev=5226.46 00:19:19.127 clat (msec): min=10, max=360, avg=97.38, stdev=46.50 00:19:19.127 lat (msec): min=10, max=360, avg=98.85, stdev=47.32 00:19:19.127 clat percentiles (msec): 00:19:19.127 | 1.00th=[ 31], 5.00th=[ 39], 10.00th=[ 47], 20.00th=[ 69], 00:19:19.127 | 30.00th=[ 78], 40.00th=[ 82], 50.00th=[ 87], 60.00th=[ 91], 00:19:19.127 | 70.00th=[ 99], 80.00th=[ 124], 90.00th=[ 176], 95.00th=[ 207], 00:19:19.127 | 99.00th=[ 234], 99.50th=[ 249], 99.90th=[ 338], 99.95th=[ 338], 00:19:19.127 | 99.99th=[ 359] 00:19:19.127 bw ( KiB/s): min=72047, max=303520, per=11.94%, avg=166059.25, stdev=64346.21, samples=20 00:19:19.127 iops : min= 281, max= 1185, avg=648.55, stdev=251.32, samples=20 00:19:19.127 lat (msec) : 20=0.40%, 50=11.15%, 100=59.67%, 250=28.33%, 500=0.44% 00:19:19.127 cpu : usr=0.27%, sys=2.15%, ctx=1750, majf=0, minf=4097 00:19:19.127 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:19:19.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.127 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.127 issued rwts: total=6554,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.127 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.127 job1: (groupid=0, jobs=1): err= 0: pid=91218: Thu Dec 5 14:25:22 2024 00:19:19.127 read: IOPS=324, BW=81.0MiB/s (85.0MB/s)(822MiB/10144msec) 00:19:19.127 slat (usec): min=22, max=100366, avg=2993.03, stdev=9372.17 00:19:19.127 clat (msec): min=117, max=369, avg=193.97, stdev=36.51 00:19:19.127 lat (msec): min=117, max=369, avg=196.96, stdev=37.38 00:19:19.127 clat percentiles (msec): 00:19:19.127 | 1.00th=[ 129], 5.00th=[ 144], 10.00th=[ 153], 20.00th=[ 169], 00:19:19.127 | 30.00th=[ 180], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 197], 00:19:19.127 | 70.00th=[ 203], 80.00th=[ 211], 90.00th=[ 230], 95.00th=[ 264], 00:19:19.127 | 99.00th=[ 326], 99.50th=[ 334], 99.90th=[ 368], 99.95th=[ 372], 00:19:19.127 | 99.99th=[ 372] 00:19:19.127 bw ( KiB/s): min=45568, max=104960, per=5.93%, avg=82528.00, stdev=13705.37, samples=20 00:19:19.127 iops : min= 178, max= 410, avg=322.30, stdev=53.56, samples=20 00:19:19.127 lat (msec) : 250=92.40%, 500=7.60% 00:19:19.127 cpu : usr=0.12%, sys=1.40%, ctx=616, majf=0, minf=4097 00:19:19.127 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:19:19.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.127 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.127 issued rwts: total=3288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.127 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.127 job2: (groupid=0, jobs=1): err= 0: pid=91219: Thu Dec 5 14:25:22 2024 00:19:19.127 read: IOPS=664, BW=166MiB/s (174MB/s)(1673MiB/10071msec) 00:19:19.127 slat (usec): min=21, max=107300, avg=1490.00, stdev=5385.86 00:19:19.127 clat (msec): min=27, max=205, avg=94.65, stdev=20.57 00:19:19.127 lat (msec): min=28, max=250, avg=96.14, stdev=21.10 00:19:19.127 clat percentiles (msec): 00:19:19.127 | 1.00th=[ 56], 5.00th=[ 71], 10.00th=[ 
77], 20.00th=[ 83], 00:19:19.127 | 30.00th=[ 86], 40.00th=[ 89], 50.00th=[ 92], 60.00th=[ 95], 00:19:19.127 | 70.00th=[ 99], 80.00th=[ 104], 90.00th=[ 111], 95.00th=[ 142], 00:19:19.127 | 99.00th=[ 176], 99.50th=[ 186], 99.90th=[ 201], 99.95th=[ 205], 00:19:19.127 | 99.99th=[ 207] 00:19:19.127 bw ( KiB/s): min=99328, max=194560, per=12.20%, avg=169620.20, stdev=25305.06, samples=20 00:19:19.127 iops : min= 388, max= 760, avg=662.45, stdev=98.83, samples=20 00:19:19.127 lat (msec) : 50=0.37%, 100=73.27%, 250=26.36% 00:19:19.127 cpu : usr=0.23%, sys=2.47%, ctx=1410, majf=0, minf=4097 00:19:19.127 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:19.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.127 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.127 issued rwts: total=6692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.127 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.127 job3: (groupid=0, jobs=1): err= 0: pid=91220: Thu Dec 5 14:25:22 2024 00:19:19.127 read: IOPS=643, BW=161MiB/s (169MB/s)(1620MiB/10068msec) 00:19:19.127 slat (usec): min=20, max=75819, avg=1527.10, stdev=5273.51 00:19:19.127 clat (msec): min=54, max=236, avg=97.68, stdev=19.56 00:19:19.127 lat (msec): min=54, max=237, avg=99.21, stdev=20.16 00:19:19.127 clat percentiles (msec): 00:19:19.127 | 1.00th=[ 63], 5.00th=[ 75], 10.00th=[ 81], 20.00th=[ 85], 00:19:19.127 | 30.00th=[ 89], 40.00th=[ 91], 50.00th=[ 94], 60.00th=[ 99], 00:19:19.127 | 70.00th=[ 103], 80.00th=[ 107], 90.00th=[ 115], 95.00th=[ 140], 00:19:19.127 | 99.00th=[ 174], 99.50th=[ 176], 99.90th=[ 192], 99.95th=[ 203], 00:19:19.127 | 99.99th=[ 236] 00:19:19.127 bw ( KiB/s): min=94909, max=185485, per=11.81%, avg=164216.85, stdev=23921.21, samples=20 00:19:19.127 iops : min= 370, max= 724, avg=641.30, stdev=93.53, samples=20 00:19:19.127 lat (msec) : 100=65.68%, 250=34.32% 00:19:19.127 cpu : usr=0.26%, sys=2.26%, ctx=1334, majf=0, minf=4097 00:19:19.127 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:19:19.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.127 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.127 issued rwts: total=6480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.127 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.127 job4: (groupid=0, jobs=1): err= 0: pid=91221: Thu Dec 5 14:25:22 2024 00:19:19.127 read: IOPS=448, BW=112MiB/s (117MB/s)(1127MiB/10062msec) 00:19:19.127 slat (usec): min=15, max=85933, avg=2103.69, stdev=7345.72 00:19:19.127 clat (msec): min=52, max=267, avg=140.44, stdev=50.64 00:19:19.127 lat (msec): min=55, max=279, avg=142.54, stdev=51.75 00:19:19.127 clat percentiles (msec): 00:19:19.127 | 1.00th=[ 62], 5.00th=[ 72], 10.00th=[ 78], 20.00th=[ 84], 00:19:19.127 | 30.00th=[ 93], 40.00th=[ 109], 50.00th=[ 153], 60.00th=[ 176], 00:19:19.127 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 199], 95.00th=[ 209], 00:19:19.127 | 99.00th=[ 228], 99.50th=[ 232], 99.90th=[ 243], 99.95th=[ 257], 00:19:19.127 | 99.99th=[ 268] 00:19:19.127 bw ( KiB/s): min=78848, max=204697, per=8.18%, avg=113805.80, stdev=42508.26, samples=20 00:19:19.127 iops : min= 308, max= 799, avg=444.45, stdev=165.97, samples=20 00:19:19.127 lat (msec) : 100=34.89%, 250=65.03%, 500=0.09% 00:19:19.127 cpu : usr=0.12%, sys=1.60%, ctx=897, majf=0, minf=4097 00:19:19.127 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 
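The read pass whose per-job statistics appear here comes from scripts/fio-wrapper, which expands -p nvmf -i 262144 -d 64 -t read -r 10 into one libaio job per connected namespace, all inheriting the [global] options echoed before the job list. A hand-written job file reproducing what the wrapper generated for this run would look roughly like the following (save it as, say, nvmf_read.fio and run "fio nvmf_read.fio"; the /dev/nvmeXn1 names depend on enumeration order):

  # approximate reconstruction of the generated read job file
  [global]
  thread=1
  invalidate=1
  rw=read
  time_based=1
  runtime=10
  ioengine=libaio
  direct=1
  bs=262144
  iodepth=64
  norandommap=1
  numjobs=1

  [job0]
  filename=/dev/nvme0n1
  [job1]
  filename=/dev/nvme10n1
  [job2]
  filename=/dev/nvme1n1
  # ...one [jobN] stanza per namespace, through /dev/nvme9n1 (11 jobs in total)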
00:19:19.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.127 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.127 issued rwts: total=4509,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.127 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.127 job5: (groupid=0, jobs=1): err= 0: pid=91222: Thu Dec 5 14:25:22 2024 00:19:19.127 read: IOPS=564, BW=141MiB/s (148MB/s)(1437MiB/10184msec) 00:19:19.127 slat (usec): min=21, max=73251, avg=1613.01, stdev=5800.13 00:19:19.127 clat (msec): min=29, max=445, avg=111.53, stdev=55.16 00:19:19.127 lat (msec): min=30, max=445, avg=113.14, stdev=55.89 00:19:19.127 clat percentiles (msec): 00:19:19.127 | 1.00th=[ 47], 5.00th=[ 63], 10.00th=[ 68], 20.00th=[ 75], 00:19:19.127 | 30.00th=[ 80], 40.00th=[ 85], 50.00th=[ 88], 60.00th=[ 95], 00:19:19.127 | 70.00th=[ 123], 80.00th=[ 148], 90.00th=[ 194], 95.00th=[ 222], 00:19:19.127 | 99.00th=[ 326], 99.50th=[ 359], 99.90th=[ 384], 99.95th=[ 447], 00:19:19.127 | 99.99th=[ 447] 00:19:19.127 bw ( KiB/s): min=65536, max=214099, per=10.46%, avg=145472.95, stdev=51058.60, samples=20 00:19:19.127 iops : min= 256, max= 836, avg=568.15, stdev=199.43, samples=20 00:19:19.127 lat (msec) : 50=1.32%, 100=62.72%, 250=33.70%, 500=2.26% 00:19:19.127 cpu : usr=0.24%, sys=1.91%, ctx=1402, majf=0, minf=4097 00:19:19.127 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:19.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.127 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.127 issued rwts: total=5748,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.127 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.127 job6: (groupid=0, jobs=1): err= 0: pid=91223: Thu Dec 5 14:25:22 2024 00:19:19.127 read: IOPS=456, BW=114MiB/s (120MB/s)(1157MiB/10150msec) 00:19:19.127 slat (usec): min=21, max=87151, avg=2109.89, stdev=7434.52 00:19:19.127 clat (usec): min=1067, max=374819, avg=138012.07, stdev=47295.62 00:19:19.127 lat (usec): min=1100, max=374878, avg=140121.96, stdev=48279.73 00:19:19.127 clat percentiles (msec): 00:19:19.127 | 1.00th=[ 28], 5.00th=[ 74], 10.00th=[ 86], 20.00th=[ 96], 00:19:19.127 | 30.00th=[ 115], 40.00th=[ 129], 50.00th=[ 136], 60.00th=[ 142], 00:19:19.127 | 70.00th=[ 153], 80.00th=[ 178], 90.00th=[ 201], 95.00th=[ 224], 00:19:19.127 | 99.00th=[ 271], 99.50th=[ 296], 99.90th=[ 376], 99.95th=[ 376], 00:19:19.127 | 99.99th=[ 376] 00:19:19.127 bw ( KiB/s): min=64512, max=182272, per=8.40%, avg=116839.20, stdev=33015.33, samples=20 00:19:19.127 iops : min= 252, max= 712, avg=456.35, stdev=129.01, samples=20 00:19:19.127 lat (msec) : 2=0.35%, 20=0.28%, 50=1.30%, 100=20.91%, 250=75.42% 00:19:19.127 lat (msec) : 500=1.75% 00:19:19.127 cpu : usr=0.21%, sys=1.58%, ctx=1013, majf=0, minf=4098 00:19:19.127 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:19:19.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.127 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.127 issued rwts: total=4629,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.127 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.127 job7: (groupid=0, jobs=1): err= 0: pid=91224: Thu Dec 5 14:25:22 2024 00:19:19.127 read: IOPS=391, BW=97.8MiB/s (103MB/s)(986MiB/10081msec) 00:19:19.127 slat (usec): min=22, max=110045, avg=2531.88, stdev=9231.64 00:19:19.127 clat (msec): min=61, 
max=298, avg=160.59, stdev=40.17 00:19:19.127 lat (msec): min=63, max=298, avg=163.12, stdev=41.52 00:19:19.127 clat percentiles (msec): 00:19:19.127 | 1.00th=[ 73], 5.00th=[ 87], 10.00th=[ 96], 20.00th=[ 120], 00:19:19.127 | 30.00th=[ 144], 40.00th=[ 161], 50.00th=[ 176], 60.00th=[ 184], 00:19:19.127 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 201], 95.00th=[ 207], 00:19:19.127 | 99.00th=[ 222], 99.50th=[ 230], 99.90th=[ 257], 99.95th=[ 292], 00:19:19.127 | 99.99th=[ 300] 00:19:19.127 bw ( KiB/s): min=76800, max=173056, per=7.14%, avg=99331.10, stdev=28410.28, samples=20 00:19:19.127 iops : min= 300, max= 676, avg=387.95, stdev=111.00, samples=20 00:19:19.127 lat (msec) : 100=14.65%, 250=85.12%, 500=0.23% 00:19:19.127 cpu : usr=0.16%, sys=1.58%, ctx=709, majf=0, minf=4097 00:19:19.127 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:19:19.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.127 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.127 issued rwts: total=3945,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.127 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.127 job8: (groupid=0, jobs=1): err= 0: pid=91225: Thu Dec 5 14:25:22 2024 00:19:19.127 read: IOPS=389, BW=97.4MiB/s (102MB/s)(982MiB/10081msec) 00:19:19.127 slat (usec): min=21, max=93158, avg=2542.32, stdev=8651.16 00:19:19.127 clat (msec): min=35, max=279, avg=161.41, stdev=41.16 00:19:19.127 lat (msec): min=35, max=296, avg=163.95, stdev=42.39 00:19:19.127 clat percentiles (msec): 00:19:19.127 | 1.00th=[ 82], 5.00th=[ 87], 10.00th=[ 93], 20.00th=[ 124], 00:19:19.127 | 30.00th=[ 140], 40.00th=[ 155], 50.00th=[ 174], 60.00th=[ 184], 00:19:19.127 | 70.00th=[ 192], 80.00th=[ 199], 90.00th=[ 205], 95.00th=[ 211], 00:19:19.127 | 99.00th=[ 234], 99.50th=[ 243], 99.90th=[ 275], 99.95th=[ 275], 00:19:19.127 | 99.99th=[ 279] 00:19:19.127 bw ( KiB/s): min=72704, max=168960, per=7.11%, avg=98871.15, stdev=27168.72, samples=20 00:19:19.127 iops : min= 284, max= 660, avg=386.15, stdev=106.16, samples=20 00:19:19.127 lat (msec) : 50=0.10%, 100=13.55%, 250=86.12%, 500=0.23% 00:19:19.127 cpu : usr=0.12%, sys=1.40%, ctx=810, majf=0, minf=4097 00:19:19.127 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:19:19.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.127 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.127 issued rwts: total=3927,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.127 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.127 job9: (groupid=0, jobs=1): err= 0: pid=91226: Thu Dec 5 14:25:22 2024 00:19:19.127 read: IOPS=507, BW=127MiB/s (133MB/s)(1278MiB/10068msec) 00:19:19.127 slat (usec): min=15, max=92197, avg=1896.48, stdev=6521.75 00:19:19.127 clat (msec): min=47, max=223, avg=123.93, stdev=31.62 00:19:19.127 lat (msec): min=61, max=268, avg=125.83, stdev=32.48 00:19:19.127 clat percentiles (msec): 00:19:19.127 | 1.00th=[ 67], 5.00th=[ 79], 10.00th=[ 83], 20.00th=[ 90], 00:19:19.127 | 30.00th=[ 103], 40.00th=[ 121], 50.00th=[ 128], 60.00th=[ 133], 00:19:19.127 | 70.00th=[ 140], 80.00th=[ 148], 90.00th=[ 161], 95.00th=[ 184], 00:19:19.128 | 99.00th=[ 201], 99.50th=[ 207], 99.90th=[ 220], 99.95th=[ 224], 00:19:19.128 | 99.99th=[ 224] 00:19:19.128 bw ( KiB/s): min=89421, max=193024, per=9.29%, avg=129152.80, stdev=30535.83, samples=20 00:19:19.128 iops : min= 349, max= 754, avg=504.45, stdev=119.26, 
samples=20 00:19:19.128 lat (msec) : 50=0.02%, 100=28.83%, 250=71.15% 00:19:19.128 cpu : usr=0.18%, sys=1.81%, ctx=1174, majf=0, minf=4097 00:19:19.128 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:19.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.128 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.128 issued rwts: total=5110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.128 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.128 job10: (groupid=0, jobs=1): err= 0: pid=91227: Thu Dec 5 14:25:22 2024 00:19:19.128 read: IOPS=437, BW=109MiB/s (115MB/s)(1111MiB/10145msec) 00:19:19.128 slat (usec): min=18, max=87373, avg=2206.23, stdev=7778.37 00:19:19.128 clat (msec): min=32, max=371, avg=143.63, stdev=41.79 00:19:19.128 lat (msec): min=32, max=371, avg=145.84, stdev=42.94 00:19:19.128 clat percentiles (msec): 00:19:19.128 | 1.00th=[ 60], 5.00th=[ 85], 10.00th=[ 94], 20.00th=[ 104], 00:19:19.128 | 30.00th=[ 124], 40.00th=[ 136], 50.00th=[ 142], 60.00th=[ 150], 00:19:19.128 | 70.00th=[ 157], 80.00th=[ 176], 90.00th=[ 201], 95.00th=[ 220], 00:19:19.128 | 99.00th=[ 239], 99.50th=[ 288], 99.90th=[ 372], 99.95th=[ 372], 00:19:19.128 | 99.99th=[ 372] 00:19:19.128 bw ( KiB/s): min=70656, max=171008, per=8.06%, avg=112063.70, stdev=27958.52, samples=20 00:19:19.128 iops : min= 276, max= 668, avg=437.65, stdev=109.29, samples=20 00:19:19.128 lat (msec) : 50=0.47%, 100=16.72%, 250=82.24%, 500=0.56% 00:19:19.128 cpu : usr=0.15%, sys=1.61%, ctx=1026, majf=0, minf=4097 00:19:19.128 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:19:19.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.128 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:19.128 issued rwts: total=4443,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.128 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:19.128 00:19:19.128 Run status group 0 (all jobs): 00:19:19.128 READ: bw=1358MiB/s (1424MB/s), 81.0MiB/s-166MiB/s (85.0MB/s-174MB/s), io=13.5GiB (14.5GB), run=10062-10184msec 00:19:19.128 00:19:19.128 Disk stats (read/write): 00:19:19.128 nvme0n1: ios=13000/0, merge=0/0, ticks=1233779/0, in_queue=1233779, util=97.58% 00:19:19.128 nvme10n1: ios=6503/0, merge=0/0, ticks=1240828/0, in_queue=1240828, util=97.29% 00:19:19.128 nvme1n1: ios=13360/0, merge=0/0, ticks=1243848/0, in_queue=1243848, util=98.08% 00:19:19.128 nvme2n1: ios=12878/0, merge=0/0, ticks=1241819/0, in_queue=1241819, util=97.68% 00:19:19.128 nvme3n1: ios=8968/0, merge=0/0, ticks=1244670/0, in_queue=1244670, util=97.30% 00:19:19.128 nvme4n1: ios=11411/0, merge=0/0, ticks=1236268/0, in_queue=1236268, util=97.92% 00:19:19.128 nvme5n1: ios=9165/0, merge=0/0, ticks=1233953/0, in_queue=1233953, util=98.16% 00:19:19.128 nvme6n1: ios=7840/0, merge=0/0, ticks=1244049/0, in_queue=1244049, util=97.94% 00:19:19.128 nvme7n1: ios=7763/0, merge=0/0, ticks=1243369/0, in_queue=1243369, util=98.39% 00:19:19.128 nvme8n1: ios=10182/0, merge=0/0, ticks=1246632/0, in_queue=1246632, util=98.24% 00:19:19.128 nvme9n1: ios=8795/0, merge=0/0, ticks=1240134/0, in_queue=1240134, util=98.35% 00:19:19.128 14:25:22 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:19:19.128 [global] 00:19:19.128 thread=1 00:19:19.128 invalidate=1 00:19:19.128 rw=randwrite 00:19:19.128 time_based=1 00:19:19.128 runtime=10 
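[editor's note] The randwrite pass above is launched through the fio-wrapper script, and the trace then dumps the job file it generated: the wrapper flags appear to line up one-to-one with the fio options (-i 262144 -> bs=262144, -d 64 -> iodepth=64, -t randwrite -> rw=randwrite, -r 10 -> runtime=10), with the rest of the [global] section and the per-device [jobN] sections continuing just below. As a rough, hand-written equivalent (a sketch only; the device list and job-file path are illustrative placeholders, not taken from this run):

# Sketch: build a job file like the one dumped in this trace and run it.
DEVICES=(/dev/nvme0n1 /dev/nvme1n1)      # assumption: substitute the namespaces nvme connect exposed
JOBFILE=/tmp/multiconnection.fio

cat > "$JOBFILE" <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1
EOF

i=0
for dev in "${DEVICES[@]}"; do
    printf '[job%d]\nfilename=%s\n' "$i" "$dev" >> "$JOBFILE"   # one job per device, as in the dump below
    i=$((i + 1))
done

fio "$JOBFILE"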
00:19:19.128 ioengine=libaio 00:19:19.128 direct=1 00:19:19.128 bs=262144 00:19:19.128 iodepth=64 00:19:19.128 norandommap=1 00:19:19.128 numjobs=1 00:19:19.128 00:19:19.128 [job0] 00:19:19.128 filename=/dev/nvme0n1 00:19:19.128 [job1] 00:19:19.128 filename=/dev/nvme10n1 00:19:19.128 [job2] 00:19:19.128 filename=/dev/nvme1n1 00:19:19.128 [job3] 00:19:19.128 filename=/dev/nvme2n1 00:19:19.128 [job4] 00:19:19.128 filename=/dev/nvme3n1 00:19:19.128 [job5] 00:19:19.128 filename=/dev/nvme4n1 00:19:19.128 [job6] 00:19:19.128 filename=/dev/nvme5n1 00:19:19.128 [job7] 00:19:19.128 filename=/dev/nvme6n1 00:19:19.128 [job8] 00:19:19.128 filename=/dev/nvme7n1 00:19:19.128 [job9] 00:19:19.128 filename=/dev/nvme8n1 00:19:19.128 [job10] 00:19:19.128 filename=/dev/nvme9n1 00:19:19.128 Could not set queue depth (nvme0n1) 00:19:19.128 Could not set queue depth (nvme10n1) 00:19:19.128 Could not set queue depth (nvme1n1) 00:19:19.128 Could not set queue depth (nvme2n1) 00:19:19.128 Could not set queue depth (nvme3n1) 00:19:19.128 Could not set queue depth (nvme4n1) 00:19:19.128 Could not set queue depth (nvme5n1) 00:19:19.128 Could not set queue depth (nvme6n1) 00:19:19.128 Could not set queue depth (nvme7n1) 00:19:19.128 Could not set queue depth (nvme8n1) 00:19:19.128 Could not set queue depth (nvme9n1) 00:19:19.128 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.128 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.128 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.128 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.128 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.128 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.128 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.128 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.128 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.128 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.128 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:19.128 fio-3.35 00:19:19.128 Starting 11 threads 00:19:29.111 00:19:29.111 job0: (groupid=0, jobs=1): err= 0: pid=91422: Thu Dec 5 14:25:33 2024 00:19:29.111 write: IOPS=1100, BW=275MiB/s (289MB/s)(2765MiB/10045msec); 0 zone resets 00:19:29.111 slat (usec): min=18, max=8236, avg=899.56, stdev=1607.12 00:19:29.111 clat (msec): min=9, max=109, avg=57.22, stdev=17.85 00:19:29.111 lat (msec): min=9, max=110, avg=58.12, stdev=18.10 00:19:29.111 clat percentiles (msec): 00:19:29.111 | 1.00th=[ 47], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 49], 00:19:29.111 | 30.00th=[ 50], 40.00th=[ 50], 50.00th=[ 51], 60.00th=[ 52], 00:19:29.111 | 70.00th=[ 52], 80.00th=[ 53], 90.00th=[ 100], 95.00th=[ 102], 00:19:29.111 | 99.00th=[ 108], 99.50th=[ 108], 99.90th=[ 110], 99.95th=[ 110], 00:19:29.111 | 99.99th=[ 110] 00:19:29.111 bw ( KiB/s): 
min=151249, max=327680, per=21.60%, avg=281637.05, stdev=67478.45, samples=20 00:19:29.111 iops : min= 590, max= 1280, avg=1099.85, stdev=263.75, samples=20 00:19:29.111 lat (msec) : 10=0.07%, 20=0.11%, 50=43.45%, 100=47.97%, 250=8.40% 00:19:29.111 cpu : usr=1.62%, sys=2.71%, ctx=14177, majf=0, minf=1 00:19:29.111 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:19:29.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.111 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.111 issued rwts: total=0,11059,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.111 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.111 job1: (groupid=0, jobs=1): err= 0: pid=91423: Thu Dec 5 14:25:33 2024 00:19:29.111 write: IOPS=339, BW=85.0MiB/s (89.1MB/s)(864MiB/10159msec); 0 zone resets 00:19:29.111 slat (usec): min=31, max=90983, avg=2892.61, stdev=5188.37 00:19:29.111 clat (msec): min=25, max=338, avg=185.18, stdev=27.20 00:19:29.111 lat (msec): min=25, max=338, avg=188.08, stdev=27.10 00:19:29.111 clat percentiles (msec): 00:19:29.111 | 1.00th=[ 140], 5.00th=[ 146], 10.00th=[ 150], 20.00th=[ 155], 00:19:29.111 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 194], 60.00th=[ 194], 00:19:29.111 | 70.00th=[ 197], 80.00th=[ 197], 90.00th=[ 199], 95.00th=[ 234], 00:19:29.111 | 99.00th=[ 279], 99.50th=[ 305], 99.90th=[ 326], 99.95th=[ 338], 00:19:29.111 | 99.99th=[ 338] 00:19:29.111 bw ( KiB/s): min=55296, max=109056, per=6.65%, avg=86784.00, stdev=12455.84, samples=20 00:19:29.111 iops : min= 216, max= 426, avg=339.00, stdev=48.66, samples=20 00:19:29.111 lat (msec) : 50=0.12%, 250=98.26%, 500=1.62% 00:19:29.111 cpu : usr=1.57%, sys=0.95%, ctx=3583, majf=0, minf=1 00:19:29.111 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:19:29.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.111 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.111 issued rwts: total=0,3454,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.111 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.111 job2: (groupid=0, jobs=1): err= 0: pid=91435: Thu Dec 5 14:25:33 2024 00:19:29.111 write: IOPS=298, BW=74.5MiB/s (78.1MB/s)(760MiB/10201msec); 0 zone resets 00:19:29.111 slat (usec): min=20, max=40128, avg=3285.60, stdev=5879.76 00:19:29.111 clat (msec): min=22, max=412, avg=211.30, stdev=36.28 00:19:29.111 lat (msec): min=22, max=412, avg=214.59, stdev=36.34 00:19:29.111 clat percentiles (msec): 00:19:29.111 | 1.00th=[ 74], 5.00th=[ 146], 10.00th=[ 155], 20.00th=[ 201], 00:19:29.111 | 30.00th=[ 209], 40.00th=[ 215], 50.00th=[ 220], 60.00th=[ 222], 00:19:29.111 | 70.00th=[ 226], 80.00th=[ 230], 90.00th=[ 239], 95.00th=[ 247], 00:19:29.111 | 99.00th=[ 300], 99.50th=[ 359], 99.90th=[ 401], 99.95th=[ 414], 00:19:29.111 | 99.99th=[ 414] 00:19:29.111 bw ( KiB/s): min=63488, max=108761, per=5.84%, avg=76196.45, stdev=10286.34, samples=20 00:19:29.111 iops : min= 248, max= 424, avg=297.60, stdev=40.04, samples=20 00:19:29.111 lat (msec) : 50=0.53%, 100=0.92%, 250=95.30%, 500=3.26% 00:19:29.111 cpu : usr=0.54%, sys=1.17%, ctx=2430, majf=0, minf=1 00:19:29.111 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:19:29.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.111 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.111 issued rwts: total=0,3040,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:19:29.111 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.111 job3: (groupid=0, jobs=1): err= 0: pid=91436: Thu Dec 5 14:25:33 2024 00:19:29.112 write: IOPS=470, BW=118MiB/s (123MB/s)(1195MiB/10161msec); 0 zone resets 00:19:29.112 slat (usec): min=17, max=18291, avg=2071.69, stdev=4006.31 00:19:29.112 clat (msec): min=7, max=343, avg=133.89, stdev=60.99 00:19:29.112 lat (msec): min=8, max=343, avg=135.96, stdev=61.81 00:19:29.112 clat percentiles (msec): 00:19:29.112 | 1.00th=[ 51], 5.00th=[ 52], 10.00th=[ 54], 20.00th=[ 55], 00:19:29.112 | 30.00th=[ 101], 40.00th=[ 107], 50.00th=[ 110], 60.00th=[ 184], 00:19:29.112 | 70.00th=[ 194], 80.00th=[ 197], 90.00th=[ 197], 95.00th=[ 199], 00:19:29.112 | 99.00th=[ 205], 99.50th=[ 271], 99.90th=[ 334], 99.95th=[ 334], 00:19:29.112 | 99.99th=[ 342] 00:19:29.112 bw ( KiB/s): min=81920, max=301056, per=9.26%, avg=120755.20, stdev=65064.47, samples=20 00:19:29.112 iops : min= 320, max= 1176, avg=471.70, stdev=254.16, samples=20 00:19:29.112 lat (msec) : 10=0.02%, 20=0.10%, 50=1.09%, 100=27.49%, 250=70.67% 00:19:29.112 lat (msec) : 500=0.63% 00:19:29.112 cpu : usr=1.27%, sys=1.39%, ctx=5567, majf=0, minf=1 00:19:29.112 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:19:29.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.112 issued rwts: total=0,4780,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.112 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.112 job4: (groupid=0, jobs=1): err= 0: pid=91437: Thu Dec 5 14:25:33 2024 00:19:29.112 write: IOPS=304, BW=76.1MiB/s (79.8MB/s)(776MiB/10196msec); 0 zone resets 00:19:29.112 slat (usec): min=18, max=35128, avg=3147.59, stdev=5626.95 00:19:29.112 clat (msec): min=29, max=410, avg=207.05, stdev=32.09 00:19:29.112 lat (msec): min=29, max=410, avg=210.19, stdev=32.09 00:19:29.112 clat percentiles (msec): 00:19:29.112 | 1.00th=[ 97], 5.00th=[ 148], 10.00th=[ 159], 20.00th=[ 192], 00:19:29.112 | 30.00th=[ 203], 40.00th=[ 209], 50.00th=[ 215], 60.00th=[ 218], 00:19:29.112 | 70.00th=[ 220], 80.00th=[ 222], 90.00th=[ 230], 95.00th=[ 239], 00:19:29.112 | 99.00th=[ 300], 99.50th=[ 355], 99.90th=[ 397], 99.95th=[ 409], 00:19:29.112 | 99.99th=[ 409] 00:19:29.112 bw ( KiB/s): min=69632, max=102912, per=5.97%, avg=77824.00, stdev=8416.30, samples=20 00:19:29.112 iops : min= 272, max= 402, avg=304.00, stdev=32.88, samples=20 00:19:29.112 lat (msec) : 50=0.16%, 100=0.90%, 250=97.45%, 500=1.48% 00:19:29.112 cpu : usr=0.89%, sys=0.81%, ctx=3702, majf=0, minf=1 00:19:29.112 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:19:29.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.112 issued rwts: total=0,3103,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.112 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.112 job5: (groupid=0, jobs=1): err= 0: pid=91438: Thu Dec 5 14:25:33 2024 00:19:29.112 write: IOPS=288, BW=72.2MiB/s (75.7MB/s)(737MiB/10203msec); 0 zone resets 00:19:29.112 slat (usec): min=19, max=46936, avg=3320.90, stdev=6072.10 00:19:29.112 clat (msec): min=50, max=413, avg=218.07, stdev=31.44 00:19:29.112 lat (msec): min=50, max=413, avg=221.39, stdev=31.47 00:19:29.112 clat percentiles (msec): 00:19:29.112 | 1.00th=[ 103], 5.00th=[ 174], 10.00th=[ 186], 
20.00th=[ 203], 00:19:29.112 | 30.00th=[ 213], 40.00th=[ 220], 50.00th=[ 224], 60.00th=[ 226], 00:19:29.112 | 70.00th=[ 232], 80.00th=[ 236], 90.00th=[ 245], 95.00th=[ 249], 00:19:29.112 | 99.00th=[ 317], 99.50th=[ 355], 99.90th=[ 397], 99.95th=[ 414], 00:19:29.112 | 99.99th=[ 414] 00:19:29.112 bw ( KiB/s): min=63488, max=90112, per=5.66%, avg=73830.40, stdev=7171.08, samples=20 00:19:29.112 iops : min= 248, max= 352, avg=288.40, stdev=28.01, samples=20 00:19:29.112 lat (msec) : 100=0.78%, 250=95.62%, 500=3.60% 00:19:29.112 cpu : usr=0.68%, sys=0.76%, ctx=2486, majf=0, minf=1 00:19:29.112 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:19:29.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.112 issued rwts: total=0,2948,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.112 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.112 job6: (groupid=0, jobs=1): err= 0: pid=91439: Thu Dec 5 14:25:33 2024 00:19:29.112 write: IOPS=295, BW=73.8MiB/s (77.4MB/s)(753MiB/10203msec); 0 zone resets 00:19:29.112 slat (usec): min=17, max=60427, avg=3315.37, stdev=6011.11 00:19:29.112 clat (msec): min=12, max=422, avg=213.23, stdev=37.10 00:19:29.112 lat (msec): min=12, max=422, avg=216.54, stdev=37.15 00:19:29.112 clat percentiles (msec): 00:19:29.112 | 1.00th=[ 53], 5.00th=[ 148], 10.00th=[ 167], 20.00th=[ 194], 00:19:29.112 | 30.00th=[ 205], 40.00th=[ 215], 50.00th=[ 220], 60.00th=[ 228], 00:19:29.112 | 70.00th=[ 232], 80.00th=[ 236], 90.00th=[ 243], 95.00th=[ 249], 00:19:29.112 | 99.00th=[ 305], 99.50th=[ 363], 99.90th=[ 401], 99.95th=[ 422], 00:19:29.112 | 99.99th=[ 422] 00:19:29.112 bw ( KiB/s): min=65536, max=100352, per=5.79%, avg=75494.40, stdev=9602.34, samples=20 00:19:29.112 iops : min= 256, max= 392, avg=294.90, stdev=37.51, samples=20 00:19:29.112 lat (msec) : 20=0.27%, 50=0.66%, 100=0.27%, 250=95.55%, 500=3.25% 00:19:29.112 cpu : usr=0.55%, sys=1.05%, ctx=2340, majf=0, minf=1 00:19:29.112 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:19:29.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.112 issued rwts: total=0,3012,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.112 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.112 job7: (groupid=0, jobs=1): err= 0: pid=91440: Thu Dec 5 14:25:33 2024 00:19:29.112 write: IOPS=342, BW=85.6MiB/s (89.7MB/s)(869MiB/10157msec); 0 zone resets 00:19:29.112 slat (usec): min=31, max=38393, avg=2872.22, stdev=4973.96 00:19:29.112 clat (msec): min=28, max=342, avg=184.03, stdev=28.00 00:19:29.112 lat (msec): min=28, max=342, avg=186.91, stdev=27.98 00:19:29.112 clat percentiles (msec): 00:19:29.112 | 1.00th=[ 123], 5.00th=[ 144], 10.00th=[ 150], 20.00th=[ 155], 00:19:29.112 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 194], 60.00th=[ 194], 00:19:29.112 | 70.00th=[ 197], 80.00th=[ 197], 90.00th=[ 199], 95.00th=[ 232], 00:19:29.112 | 99.00th=[ 253], 99.50th=[ 292], 99.90th=[ 330], 99.95th=[ 342], 00:19:29.112 | 99.99th=[ 342] 00:19:29.112 bw ( KiB/s): min=67719, max=108544, per=6.70%, avg=87379.55, stdev=11118.67, samples=20 00:19:29.112 iops : min= 264, max= 424, avg=341.30, stdev=43.48, samples=20 00:19:29.112 lat (msec) : 50=0.35%, 100=0.46%, 250=97.38%, 500=1.81% 00:19:29.112 cpu : usr=1.44%, sys=1.15%, ctx=4216, majf=0, minf=1 00:19:29.112 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:19:29.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.112 issued rwts: total=0,3476,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.112 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.112 job8: (groupid=0, jobs=1): err= 0: pid=91441: Thu Dec 5 14:25:33 2024 00:19:29.112 write: IOPS=986, BW=247MiB/s (259MB/s)(2478MiB/10046msec); 0 zone resets 00:19:29.112 slat (usec): min=17, max=44640, avg=974.17, stdev=2019.00 00:19:29.112 clat (msec): min=6, max=246, avg=63.87, stdev=34.35 00:19:29.112 lat (msec): min=6, max=247, avg=64.84, stdev=34.81 00:19:29.112 clat percentiles (msec): 00:19:29.112 | 1.00th=[ 31], 5.00th=[ 50], 10.00th=[ 50], 20.00th=[ 51], 00:19:29.112 | 30.00th=[ 52], 40.00th=[ 53], 50.00th=[ 53], 60.00th=[ 54], 00:19:29.112 | 70.00th=[ 55], 80.00th=[ 56], 90.00th=[ 107], 95.00th=[ 110], 00:19:29.112 | 99.00th=[ 234], 99.50th=[ 241], 99.90th=[ 245], 99.95th=[ 245], 00:19:29.112 | 99.99th=[ 247] 00:19:29.112 bw ( KiB/s): min=71680, max=314368, per=19.33%, avg=252134.40, stdev=87540.65, samples=20 00:19:29.112 iops : min= 280, max= 1228, avg=984.90, stdev=341.96, samples=20 00:19:29.112 lat (msec) : 10=0.03%, 20=0.27%, 50=13.82%, 100=71.48%, 250=14.40% 00:19:29.112 cpu : usr=1.43%, sys=2.66%, ctx=12170, majf=0, minf=1 00:19:29.112 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:29.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.112 issued rwts: total=0,9912,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.112 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.112 job9: (groupid=0, jobs=1): err= 0: pid=91442: Thu Dec 5 14:25:33 2024 00:19:29.112 write: IOPS=364, BW=91.2MiB/s (95.7MB/s)(931MiB/10204msec); 0 zone resets 00:19:29.112 slat (usec): min=17, max=71493, avg=2605.61, stdev=5232.73 00:19:29.112 clat (msec): min=12, max=414, avg=172.66, stdev=69.68 00:19:29.112 lat (msec): min=12, max=414, avg=175.27, stdev=70.58 00:19:29.112 clat percentiles (msec): 00:19:29.112 | 1.00th=[ 28], 5.00th=[ 88], 10.00th=[ 96], 20.00th=[ 101], 00:19:29.112 | 30.00th=[ 103], 40.00th=[ 108], 50.00th=[ 213], 60.00th=[ 220], 00:19:29.112 | 70.00th=[ 226], 80.00th=[ 234], 90.00th=[ 241], 95.00th=[ 253], 00:19:29.112 | 99.00th=[ 296], 99.50th=[ 347], 99.90th=[ 401], 99.95th=[ 414], 00:19:29.112 | 99.99th=[ 414] 00:19:29.112 bw ( KiB/s): min=63488, max=163840, per=7.19%, avg=93703.15, stdev=38280.72, samples=20 00:19:29.112 iops : min= 248, max= 640, avg=366.00, stdev=149.55, samples=20 00:19:29.112 lat (msec) : 20=0.32%, 50=2.23%, 100=15.15%, 250=76.66%, 500=5.64% 00:19:29.112 cpu : usr=1.16%, sys=0.88%, ctx=4635, majf=0, minf=1 00:19:29.112 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:19:29.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.112 issued rwts: total=0,3724,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.112 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.112 job10: (groupid=0, jobs=1): err= 0: pid=91443: Thu Dec 5 14:25:33 2024 00:19:29.112 write: IOPS=341, BW=85.4MiB/s (89.5MB/s)(868MiB/10165msec); 0 zone resets 00:19:29.112 slat (usec): min=24, 
max=40477, avg=2878.90, stdev=5054.18 00:19:29.112 clat (msec): min=6, max=345, avg=184.43, stdev=29.66 00:19:29.112 lat (msec): min=6, max=345, avg=187.31, stdev=29.67 00:19:29.112 clat percentiles (msec): 00:19:29.112 | 1.00th=[ 110], 5.00th=[ 144], 10.00th=[ 150], 20.00th=[ 155], 00:19:29.112 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 194], 60.00th=[ 194], 00:19:29.112 | 70.00th=[ 197], 80.00th=[ 197], 90.00th=[ 199], 95.00th=[ 241], 00:19:29.112 | 99.00th=[ 257], 99.50th=[ 296], 99.90th=[ 334], 99.95th=[ 347], 00:19:29.112 | 99.99th=[ 347] 00:19:29.112 bw ( KiB/s): min=65024, max=108544, per=6.69%, avg=87277.35, stdev=11118.39, samples=20 00:19:29.112 iops : min= 254, max= 424, avg=340.65, stdev=43.42, samples=20 00:19:29.112 lat (msec) : 10=0.09%, 50=0.23%, 100=0.58%, 250=95.39%, 500=3.72% 00:19:29.113 cpu : usr=0.74%, sys=0.91%, ctx=4804, majf=0, minf=1 00:19:29.113 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:19:29.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.113 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:29.113 issued rwts: total=0,3471,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.113 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.113 00:19:29.113 Run status group 0 (all jobs): 00:19:29.113 WRITE: bw=1273MiB/s (1335MB/s), 72.2MiB/s-275MiB/s (75.7MB/s-289MB/s), io=12.7GiB (13.6GB), run=10045-10204msec 00:19:29.113 00:19:29.113 Disk stats (read/write): 00:19:29.113 nvme0n1: ios=49/22004, merge=0/0, ticks=35/1220398, in_queue=1220433, util=97.92% 00:19:29.113 nvme10n1: ios=49/6775, merge=0/0, ticks=41/1210963, in_queue=1211004, util=97.99% 00:19:29.113 nvme1n1: ios=34/5950, merge=0/0, ticks=34/1207736, in_queue=1207770, util=97.99% 00:19:29.113 nvme2n1: ios=0/9431, merge=0/0, ticks=0/1211060, in_queue=1211060, util=98.03% 00:19:29.113 nvme3n1: ios=0/6076, merge=0/0, ticks=0/1210035, in_queue=1210035, util=97.99% 00:19:29.113 nvme4n1: ios=0/5762, merge=0/0, ticks=0/1209555, in_queue=1209555, util=98.24% 00:19:29.113 nvme5n1: ios=0/5900, merge=0/0, ticks=0/1208407, in_queue=1208407, util=98.41% 00:19:29.113 nvme6n1: ios=0/6824, merge=0/0, ticks=0/1211060, in_queue=1211060, util=98.40% 00:19:29.113 nvme7n1: ios=0/19672, merge=0/0, ticks=0/1219759, in_queue=1219759, util=98.67% 00:19:29.113 nvme8n1: ios=0/7319, merge=0/0, ticks=0/1210086, in_queue=1210086, util=98.85% 00:19:29.113 nvme9n1: ios=0/6817, merge=0/0, ticks=0/1212637, in_queue=1212637, util=98.99% 00:19:29.113 14:25:33 -- target/multiconnection.sh@36 -- # sync 00:19:29.113 14:25:33 -- target/multiconnection.sh@37 -- # seq 1 11 00:19:29.113 14:25:33 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.113 14:25:33 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:29.113 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:29.113 14:25:33 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:19:29.113 14:25:33 -- common/autotest_common.sh@1208 -- # local i=0 00:19:29.113 14:25:33 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:29.113 14:25:33 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:19:29.113 14:25:33 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:29.113 14:25:33 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:19:29.113 14:25:33 -- common/autotest_common.sh@1220 -- # return 0 00:19:29.113 14:25:33 -- target/multiconnection.sh@40 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:29.113 14:25:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.113 14:25:33 -- common/autotest_common.sh@10 -- # set +x 00:19:29.113 14:25:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.113 14:25:33 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.113 14:25:33 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:19:29.113 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:19:29.113 14:25:33 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:19:29.113 14:25:33 -- common/autotest_common.sh@1208 -- # local i=0 00:19:29.113 14:25:33 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:29.113 14:25:33 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:19:29.113 14:25:33 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:29.113 14:25:33 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:19:29.113 14:25:33 -- common/autotest_common.sh@1220 -- # return 0 00:19:29.113 14:25:33 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:29.113 14:25:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.113 14:25:33 -- common/autotest_common.sh@10 -- # set +x 00:19:29.113 14:25:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.113 14:25:33 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.113 14:25:33 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:19:29.113 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:19:29.113 14:25:33 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:19:29.113 14:25:33 -- common/autotest_common.sh@1208 -- # local i=0 00:19:29.113 14:25:33 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:29.113 14:25:33 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:19:29.113 14:25:33 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:29.113 14:25:33 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:19:29.113 14:25:33 -- common/autotest_common.sh@1220 -- # return 0 00:19:29.113 14:25:33 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:29.113 14:25:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.113 14:25:33 -- common/autotest_common.sh@10 -- # set +x 00:19:29.113 14:25:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.113 14:25:33 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.113 14:25:33 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:19:29.113 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:19:29.113 14:25:33 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:19:29.113 14:25:33 -- common/autotest_common.sh@1208 -- # local i=0 00:19:29.113 14:25:33 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:29.113 14:25:33 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:19:29.113 14:25:33 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:29.113 14:25:33 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:19:29.113 14:25:34 -- common/autotest_common.sh@1220 -- # return 0 00:19:29.113 14:25:34 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:19:29.113 14:25:34 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.113 14:25:34 -- common/autotest_common.sh@10 -- # set +x 00:19:29.113 14:25:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.113 14:25:34 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.113 14:25:34 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:19:29.113 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:19:29.113 14:25:34 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:19:29.113 14:25:34 -- common/autotest_common.sh@1208 -- # local i=0 00:19:29.113 14:25:34 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:29.113 14:25:34 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:19:29.113 14:25:34 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:19:29.113 14:25:34 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:29.113 14:25:34 -- common/autotest_common.sh@1220 -- # return 0 00:19:29.113 14:25:34 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:19:29.113 14:25:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.113 14:25:34 -- common/autotest_common.sh@10 -- # set +x 00:19:29.113 14:25:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.113 14:25:34 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.113 14:25:34 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:19:29.113 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:19:29.113 14:25:34 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:19:29.113 14:25:34 -- common/autotest_common.sh@1208 -- # local i=0 00:19:29.113 14:25:34 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:29.113 14:25:34 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:19:29.113 14:25:34 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:29.113 14:25:34 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:19:29.113 14:25:34 -- common/autotest_common.sh@1220 -- # return 0 00:19:29.113 14:25:34 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:19:29.113 14:25:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.113 14:25:34 -- common/autotest_common.sh@10 -- # set +x 00:19:29.113 14:25:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.113 14:25:34 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.113 14:25:34 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:19:29.113 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:19:29.113 14:25:34 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:19:29.113 14:25:34 -- common/autotest_common.sh@1208 -- # local i=0 00:19:29.113 14:25:34 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:29.113 14:25:34 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:19:29.113 14:25:34 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:29.113 14:25:34 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:19:29.113 14:25:34 -- common/autotest_common.sh@1220 -- # return 0 00:19:29.113 14:25:34 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:19:29.113 14:25:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.113 14:25:34 -- 
common/autotest_common.sh@10 -- # set +x 00:19:29.113 14:25:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.113 14:25:34 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.113 14:25:34 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:19:29.113 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:19:29.113 14:25:34 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:19:29.113 14:25:34 -- common/autotest_common.sh@1208 -- # local i=0 00:19:29.113 14:25:34 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:29.113 14:25:34 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:19:29.113 14:25:34 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:19:29.113 14:25:34 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:29.113 14:25:34 -- common/autotest_common.sh@1220 -- # return 0 00:19:29.113 14:25:34 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:19:29.113 14:25:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.113 14:25:34 -- common/autotest_common.sh@10 -- # set +x 00:19:29.113 14:25:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.113 14:25:34 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.113 14:25:34 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:19:29.113 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:19:29.113 14:25:34 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:19:29.113 14:25:34 -- common/autotest_common.sh@1208 -- # local i=0 00:19:29.113 14:25:34 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:29.113 14:25:34 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:19:29.113 14:25:34 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:29.113 14:25:34 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:19:29.113 14:25:34 -- common/autotest_common.sh@1220 -- # return 0 00:19:29.113 14:25:34 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:19:29.113 14:25:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.114 14:25:34 -- common/autotest_common.sh@10 -- # set +x 00:19:29.114 14:25:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.114 14:25:34 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.114 14:25:34 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:19:29.114 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:19:29.114 14:25:34 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:19:29.114 14:25:34 -- common/autotest_common.sh@1208 -- # local i=0 00:19:29.114 14:25:34 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:29.114 14:25:34 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:19:29.114 14:25:34 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:29.114 14:25:34 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:19:29.114 14:25:34 -- common/autotest_common.sh@1220 -- # return 0 00:19:29.114 14:25:34 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:19:29.114 14:25:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.114 14:25:34 -- common/autotest_common.sh@10 -- # set +x 00:19:29.114 14:25:34 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.114 14:25:34 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.114 14:25:34 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:19:29.114 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:19:29.114 14:25:34 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:19:29.114 14:25:34 -- common/autotest_common.sh@1208 -- # local i=0 00:19:29.114 14:25:34 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:29.114 14:25:34 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:19:29.114 14:25:34 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:29.114 14:25:34 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:19:29.114 14:25:34 -- common/autotest_common.sh@1220 -- # return 0 00:19:29.114 14:25:34 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:19:29.114 14:25:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.114 14:25:34 -- common/autotest_common.sh@10 -- # set +x 00:19:29.114 14:25:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.114 14:25:34 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:19:29.114 14:25:34 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:19:29.114 14:25:34 -- target/multiconnection.sh@47 -- # nvmftestfini 00:19:29.114 14:25:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:29.114 14:25:34 -- nvmf/common.sh@116 -- # sync 00:19:29.114 14:25:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:29.114 14:25:34 -- nvmf/common.sh@119 -- # set +e 00:19:29.114 14:25:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:29.114 14:25:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:29.114 rmmod nvme_tcp 00:19:29.114 rmmod nvme_fabrics 00:19:29.114 rmmod nvme_keyring 00:19:29.372 14:25:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:29.372 14:25:34 -- nvmf/common.sh@123 -- # set -e 00:19:29.372 14:25:34 -- nvmf/common.sh@124 -- # return 0 00:19:29.372 14:25:34 -- nvmf/common.sh@477 -- # '[' -n 90734 ']' 00:19:29.372 14:25:34 -- nvmf/common.sh@478 -- # killprocess 90734 00:19:29.372 14:25:34 -- common/autotest_common.sh@936 -- # '[' -z 90734 ']' 00:19:29.372 14:25:34 -- common/autotest_common.sh@940 -- # kill -0 90734 00:19:29.372 14:25:34 -- common/autotest_common.sh@941 -- # uname 00:19:29.372 14:25:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:29.372 14:25:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90734 00:19:29.373 killing process with pid 90734 00:19:29.373 14:25:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:29.373 14:25:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:29.373 14:25:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90734' 00:19:29.373 14:25:34 -- common/autotest_common.sh@955 -- # kill 90734 00:19:29.373 14:25:34 -- common/autotest_common.sh@960 -- # wait 90734 00:19:29.938 14:25:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:29.938 14:25:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:29.938 14:25:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:29.938 14:25:35 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:29.938 14:25:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:29.938 14:25:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:29.938 
14:25:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:29.938 14:25:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:29.938 14:25:35 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:29.938 00:19:29.938 real 0m49.990s 00:19:29.938 user 2m45.667s 00:19:29.938 sys 0m24.759s 00:19:29.938 ************************************ 00:19:29.938 END TEST nvmf_multiconnection 00:19:29.938 ************************************ 00:19:29.938 14:25:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:29.938 14:25:35 -- common/autotest_common.sh@10 -- # set +x 00:19:29.938 14:25:35 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:29.938 14:25:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:29.938 14:25:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:29.938 14:25:35 -- common/autotest_common.sh@10 -- # set +x 00:19:29.938 ************************************ 00:19:29.938 START TEST nvmf_initiator_timeout 00:19:29.938 ************************************ 00:19:29.938 14:25:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:29.938 * Looking for test storage... 00:19:29.938 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:29.938 14:25:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:29.938 14:25:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:29.938 14:25:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:29.938 14:25:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:29.938 14:25:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:29.938 14:25:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:29.938 14:25:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:29.938 14:25:35 -- scripts/common.sh@335 -- # IFS=.-: 00:19:29.938 14:25:35 -- scripts/common.sh@335 -- # read -ra ver1 00:19:29.938 14:25:35 -- scripts/common.sh@336 -- # IFS=.-: 00:19:29.938 14:25:35 -- scripts/common.sh@336 -- # read -ra ver2 00:19:29.938 14:25:35 -- scripts/common.sh@337 -- # local 'op=<' 00:19:29.938 14:25:35 -- scripts/common.sh@339 -- # ver1_l=2 00:19:29.938 14:25:35 -- scripts/common.sh@340 -- # ver2_l=1 00:19:29.938 14:25:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:29.938 14:25:35 -- scripts/common.sh@343 -- # case "$op" in 00:19:29.938 14:25:35 -- scripts/common.sh@344 -- # : 1 00:19:29.938 14:25:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:29.938 14:25:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:29.938 14:25:35 -- scripts/common.sh@364 -- # decimal 1 00:19:29.938 14:25:35 -- scripts/common.sh@352 -- # local d=1 00:19:29.938 14:25:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:29.938 14:25:35 -- scripts/common.sh@354 -- # echo 1 00:19:29.938 14:25:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:29.938 14:25:35 -- scripts/common.sh@365 -- # decimal 2 00:19:29.938 14:25:35 -- scripts/common.sh@352 -- # local d=2 00:19:29.938 14:25:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:29.938 14:25:35 -- scripts/common.sh@354 -- # echo 2 00:19:29.938 14:25:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:29.938 14:25:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:29.939 14:25:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:29.939 14:25:35 -- scripts/common.sh@367 -- # return 0 00:19:29.939 14:25:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:29.939 14:25:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:29.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.939 --rc genhtml_branch_coverage=1 00:19:29.939 --rc genhtml_function_coverage=1 00:19:29.939 --rc genhtml_legend=1 00:19:29.939 --rc geninfo_all_blocks=1 00:19:29.939 --rc geninfo_unexecuted_blocks=1 00:19:29.939 00:19:29.939 ' 00:19:29.939 14:25:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:29.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.939 --rc genhtml_branch_coverage=1 00:19:29.939 --rc genhtml_function_coverage=1 00:19:29.939 --rc genhtml_legend=1 00:19:29.939 --rc geninfo_all_blocks=1 00:19:29.939 --rc geninfo_unexecuted_blocks=1 00:19:29.939 00:19:29.939 ' 00:19:29.939 14:25:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:29.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.939 --rc genhtml_branch_coverage=1 00:19:29.939 --rc genhtml_function_coverage=1 00:19:29.939 --rc genhtml_legend=1 00:19:29.939 --rc geninfo_all_blocks=1 00:19:29.939 --rc geninfo_unexecuted_blocks=1 00:19:29.939 00:19:29.939 ' 00:19:29.939 14:25:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:29.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.939 --rc genhtml_branch_coverage=1 00:19:29.939 --rc genhtml_function_coverage=1 00:19:29.939 --rc genhtml_legend=1 00:19:29.939 --rc geninfo_all_blocks=1 00:19:29.939 --rc geninfo_unexecuted_blocks=1 00:19:29.939 00:19:29.939 ' 00:19:29.939 14:25:35 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:29.939 14:25:35 -- nvmf/common.sh@7 -- # uname -s 00:19:29.939 14:25:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:29.939 14:25:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:29.939 14:25:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:29.939 14:25:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:29.939 14:25:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:29.939 14:25:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:29.939 14:25:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:29.939 14:25:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:29.939 14:25:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:29.939 14:25:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:29.939 14:25:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 
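[editor's note] The last two entries above show nvmf/common.sh minting a per-run host identity: 'nvme gen-hostnqn' returns a uuid-based NQN, and the uuid tail is then reused as the host ID (visible a few lines further on). A minimal sketch of deriving the same pair outside the harness, assuming nvme-cli is installed; the variable names simply mirror the trace:

# Sketch: derive the host NQN / host ID pair the trace records here.
NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}         # keep only the uuid after the last ':'
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
echo "hostnqn=$NVME_HOSTNQN"
echo "hostid=$NVME_HOSTID"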
00:19:29.939 14:25:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:19:29.939 14:25:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:29.939 14:25:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:29.939 14:25:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:29.939 14:25:35 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:29.939 14:25:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:29.939 14:25:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:29.939 14:25:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:29.939 14:25:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.939 14:25:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.939 14:25:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.939 14:25:35 -- paths/export.sh@5 -- # export PATH 00:19:29.939 14:25:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.939 14:25:35 -- nvmf/common.sh@46 -- # : 0 00:19:29.939 14:25:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:29.939 14:25:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:29.939 14:25:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:29.939 14:25:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:30.197 14:25:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:30.197 14:25:35 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:19:30.197 14:25:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:30.197 14:25:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:30.197 14:25:35 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:30.197 14:25:35 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:30.197 14:25:35 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:19:30.197 14:25:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:30.197 14:25:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:30.197 14:25:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:30.197 14:25:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:30.197 14:25:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:30.197 14:25:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.197 14:25:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:30.197 14:25:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.197 14:25:35 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:30.197 14:25:35 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:30.197 14:25:35 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:30.197 14:25:35 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:30.197 14:25:35 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:30.197 14:25:35 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:30.197 14:25:35 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:30.197 14:25:35 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:30.197 14:25:35 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:30.197 14:25:35 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:30.197 14:25:35 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:30.197 14:25:35 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:30.197 14:25:35 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:30.197 14:25:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:30.197 14:25:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:30.197 14:25:35 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:30.197 14:25:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:30.197 14:25:35 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:30.197 14:25:35 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:30.197 14:25:35 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:30.197 Cannot find device "nvmf_tgt_br" 00:19:30.197 14:25:35 -- nvmf/common.sh@154 -- # true 00:19:30.197 14:25:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:30.197 Cannot find device "nvmf_tgt_br2" 00:19:30.197 14:25:35 -- nvmf/common.sh@155 -- # true 00:19:30.197 14:25:35 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:30.197 14:25:35 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:30.197 Cannot find device "nvmf_tgt_br" 00:19:30.197 14:25:35 -- nvmf/common.sh@157 -- # true 00:19:30.197 14:25:35 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:30.197 Cannot find device "nvmf_tgt_br2" 00:19:30.197 14:25:35 -- nvmf/common.sh@158 -- # true 00:19:30.197 14:25:35 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:30.197 14:25:35 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:30.197 14:25:35 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
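[editor's note] The 'Cannot find device' messages above and the 'Cannot open network namespace' errors just below are expected on a clean host: nvmf_veth_init first tears down any topology left behind by an earlier run, and each delete is allowed to fail before the setup proper starts. A defensive, best-effort equivalent of that cleanup (a sketch; the names match the trace, and the final netns delete is an addition not shown in this log):

# Best-effort teardown of a previous run's veth/bridge/namespace topology;
# every step may fail harmlessly when nothing is left over.
ip link set nvmf_init_br nomaster 2>/dev/null || true
ip link set nvmf_tgt_br  nomaster 2>/dev/null || true
ip link set nvmf_tgt_br2 nomaster 2>/dev/null || true
ip link delete nvmf_br type bridge 2>/dev/null || true
ip link delete nvmf_init_if 2>/dev/null || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  2>/dev/null || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null || true
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true   # assumption: not traced above, but completes the cleanup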
00:19:30.197 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:30.197 14:25:35 -- nvmf/common.sh@161 -- # true 00:19:30.197 14:25:35 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:30.197 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:30.197 14:25:35 -- nvmf/common.sh@162 -- # true 00:19:30.197 14:25:35 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:30.197 14:25:35 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:30.197 14:25:35 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:30.197 14:25:35 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:30.197 14:25:35 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:30.197 14:25:35 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:30.197 14:25:35 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:30.197 14:25:35 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:30.197 14:25:35 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:30.197 14:25:35 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:30.197 14:25:35 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:30.197 14:25:35 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:30.197 14:25:35 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:30.197 14:25:35 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:30.197 14:25:35 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:30.197 14:25:35 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:30.197 14:25:35 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:30.197 14:25:35 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:30.197 14:25:35 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:30.455 14:25:35 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:30.455 14:25:35 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:30.455 14:25:35 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:30.455 14:25:35 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:30.455 14:25:35 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:30.455 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:30.455 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:19:30.455 00:19:30.455 --- 10.0.0.2 ping statistics --- 00:19:30.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.455 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:19:30.455 14:25:35 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:30.455 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:30.455 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:19:30.455 00:19:30.455 --- 10.0.0.3 ping statistics --- 00:19:30.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.455 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:19:30.455 14:25:35 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:30.455 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
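[editor's note] For anyone reproducing this outside the harness, the ip commands above build a small veth topology: nvmf_init_if (10.0.0.1/24) stays in the root namespace as the initiator side, nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) are moved into the nvmf_tgt_ns_spdk namespace as the target side, and the peer ends are bridged over nvmf_br, with iptables opened for TCP port 4420. A condensed sketch of the same setup (root required):

# Condensed re-creation of the topology set up in the trace above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2      # initiator -> target data path, as checked in the trace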
00:19:30.455 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:19:30.455 00:19:30.455 --- 10.0.0.1 ping statistics --- 00:19:30.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.455 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:19:30.455 14:25:35 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:30.455 14:25:35 -- nvmf/common.sh@421 -- # return 0 00:19:30.455 14:25:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:30.455 14:25:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:30.455 14:25:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:30.455 14:25:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:30.455 14:25:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:30.455 14:25:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:30.455 14:25:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:30.455 14:25:35 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:19:30.455 14:25:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:30.455 14:25:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:30.455 14:25:35 -- common/autotest_common.sh@10 -- # set +x 00:19:30.455 14:25:35 -- nvmf/common.sh@469 -- # nvmfpid=91819 00:19:30.455 14:25:35 -- nvmf/common.sh@470 -- # waitforlisten 91819 00:19:30.455 14:25:35 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:30.455 14:25:35 -- common/autotest_common.sh@829 -- # '[' -z 91819 ']' 00:19:30.455 14:25:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.455 14:25:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:30.455 14:25:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.455 14:25:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:30.455 14:25:35 -- common/autotest_common.sh@10 -- # set +x 00:19:30.455 [2024-12-05 14:25:35.972987] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:30.455 [2024-12-05 14:25:35.973451] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:30.713 [2024-12-05 14:25:36.106245] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:30.713 [2024-12-05 14:25:36.165226] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:30.713 [2024-12-05 14:25:36.165392] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:30.713 [2024-12-05 14:25:36.165407] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:30.713 [2024-12-05 14:25:36.165415] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
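The nvmf_veth_init sequence above builds the test topology that these pings just verified: veth pairs whose target-side ends (nvmf_tgt_if, nvmf_tgt_if2) are moved into the nvmf_tgt_ns_spdk namespace and addressed 10.0.0.2/24 and 10.0.0.3/24, while the initiator end (nvmf_init_if) stays in the root namespace at 10.0.0.1/24; the host-side peers are enslaved to the nvmf_br bridge and iptables rules admit TCP port 4420 plus forwarding across the bridge. A minimal standalone sketch of the same idea, assuming root privileges and iproute2, with only one target interface shown for brevity:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The single ping to each address is the harness's smoke test that the bridge actually forwards between the two namespaces before the target is exercised.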
00:19:30.713 [2024-12-05 14:25:36.165583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.713 [2024-12-05 14:25:36.165994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:30.713 [2024-12-05 14:25:36.166732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:30.713 [2024-12-05 14:25:36.166764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.647 14:25:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:31.647 14:25:37 -- common/autotest_common.sh@862 -- # return 0 00:19:31.647 14:25:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:31.647 14:25:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:31.647 14:25:37 -- common/autotest_common.sh@10 -- # set +x 00:19:31.647 14:25:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:31.647 14:25:37 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:31.647 14:25:37 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:31.647 14:25:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.647 14:25:37 -- common/autotest_common.sh@10 -- # set +x 00:19:31.647 Malloc0 00:19:31.647 14:25:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.647 14:25:37 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:19:31.647 14:25:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.647 14:25:37 -- common/autotest_common.sh@10 -- # set +x 00:19:31.647 Delay0 00:19:31.647 14:25:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.647 14:25:37 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:31.647 14:25:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.647 14:25:37 -- common/autotest_common.sh@10 -- # set +x 00:19:31.647 [2024-12-05 14:25:37.123344] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:31.647 14:25:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.647 14:25:37 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:31.647 14:25:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.647 14:25:37 -- common/autotest_common.sh@10 -- # set +x 00:19:31.647 14:25:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.647 14:25:37 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:31.647 14:25:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.647 14:25:37 -- common/autotest_common.sh@10 -- # set +x 00:19:31.647 14:25:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.647 14:25:37 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:31.647 14:25:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.647 14:25:37 -- common/autotest_common.sh@10 -- # set +x 00:19:31.647 [2024-12-05 14:25:37.155589] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:31.647 14:25:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.647 14:25:37 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:31.905 14:25:37 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:19:31.905 14:25:37 -- common/autotest_common.sh@1187 -- # local i=0 00:19:31.905 14:25:37 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:31.905 14:25:37 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:31.905 14:25:37 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:33.803 14:25:39 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:33.803 14:25:39 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:33.803 14:25:39 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:19:33.803 14:25:39 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:33.803 14:25:39 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:33.803 14:25:39 -- common/autotest_common.sh@1197 -- # return 0 00:19:33.803 14:25:39 -- target/initiator_timeout.sh@35 -- # fio_pid=91901 00:19:33.803 14:25:39 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:19:33.803 14:25:39 -- target/initiator_timeout.sh@37 -- # sleep 3 00:19:33.803 [global] 00:19:33.803 thread=1 00:19:33.803 invalidate=1 00:19:33.803 rw=write 00:19:33.803 time_based=1 00:19:33.803 runtime=60 00:19:33.804 ioengine=libaio 00:19:33.804 direct=1 00:19:33.804 bs=4096 00:19:33.804 iodepth=1 00:19:33.804 norandommap=0 00:19:33.804 numjobs=1 00:19:33.804 00:19:33.804 verify_dump=1 00:19:33.804 verify_backlog=512 00:19:33.804 verify_state_save=0 00:19:33.804 do_verify=1 00:19:33.804 verify=crc32c-intel 00:19:33.804 [job0] 00:19:33.804 filename=/dev/nvme0n1 00:19:33.804 Could not set queue depth (nvme0n1) 00:19:34.062 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:34.062 fio-3.35 00:19:34.062 Starting 1 thread 00:19:37.340 14:25:42 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:19:37.340 14:25:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.340 14:25:42 -- common/autotest_common.sh@10 -- # set +x 00:19:37.340 true 00:19:37.340 14:25:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.340 14:25:42 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:19:37.340 14:25:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.340 14:25:42 -- common/autotest_common.sh@10 -- # set +x 00:19:37.340 true 00:19:37.340 14:25:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.340 14:25:42 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:19:37.340 14:25:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.340 14:25:42 -- common/autotest_common.sh@10 -- # set +x 00:19:37.340 true 00:19:37.340 14:25:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.340 14:25:42 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:19:37.340 14:25:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.340 14:25:42 -- common/autotest_common.sh@10 -- # set +x 00:19:37.340 true 00:19:37.340 14:25:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.340 14:25:42 -- 
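The fio-wrapper invocation above expands to the job file printed in the log: one libaio job doing time-based 4 KiB sequential writes at queue depth 1 for 60 seconds with crc32c-intel verification against the freshly connected /dev/nvme0n1. Roughly the same job could be expressed directly on the fio command line (a sketch only; the wrapper additionally handles device discovery and verify-state cleanup):

fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
    --rw=write --bs=4096 --iodepth=1 --numjobs=1 \
    --time_based=1 --runtime=60 \
    --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512

Keeping iodepth at 1 makes every I/O latency-visible, which matters because the test next raises the Delay0 latencies with bdev_delay_update_latency and later restores them, exercising the initiator timeout path while this job is still running.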
target/initiator_timeout.sh@45 -- # sleep 3 00:19:39.891 14:25:45 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:19:39.891 14:25:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.891 14:25:45 -- common/autotest_common.sh@10 -- # set +x 00:19:39.891 true 00:19:39.891 14:25:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.891 14:25:45 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:19:39.891 14:25:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.891 14:25:45 -- common/autotest_common.sh@10 -- # set +x 00:19:39.891 true 00:19:39.891 14:25:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.891 14:25:45 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:19:39.891 14:25:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.891 14:25:45 -- common/autotest_common.sh@10 -- # set +x 00:19:39.891 true 00:19:39.891 14:25:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.891 14:25:45 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:19:39.891 14:25:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.891 14:25:45 -- common/autotest_common.sh@10 -- # set +x 00:19:39.891 true 00:19:39.891 14:25:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.891 14:25:45 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:19:39.891 14:25:45 -- target/initiator_timeout.sh@54 -- # wait 91901 00:20:36.124 00:20:36.124 job0: (groupid=0, jobs=1): err= 0: pid=91923: Thu Dec 5 14:26:39 2024 00:20:36.124 read: IOPS=770, BW=3083KiB/s (3157kB/s)(181MiB/60001msec) 00:20:36.124 slat (nsec): min=11746, max=76721, avg=13823.65, stdev=3318.96 00:20:36.124 clat (usec): min=153, max=907, avg=209.87, stdev=21.87 00:20:36.124 lat (usec): min=166, max=922, avg=223.70, stdev=22.44 00:20:36.124 clat percentiles (usec): 00:20:36.124 | 1.00th=[ 172], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 196], 00:20:36.124 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 212], 00:20:36.124 | 70.00th=[ 217], 80.00th=[ 223], 90.00th=[ 235], 95.00th=[ 245], 00:20:36.124 | 99.00th=[ 273], 99.50th=[ 285], 99.90th=[ 355], 99.95th=[ 449], 00:20:36.124 | 99.99th=[ 701] 00:20:36.124 write: IOPS=776, BW=3106KiB/s (3181kB/s)(182MiB/60001msec); 0 zone resets 00:20:36.124 slat (usec): min=14, max=10406, avg=21.34, stdev=60.45 00:20:36.124 clat (usec): min=120, max=40860k, avg=1041.28, stdev=189295.97 00:20:36.124 lat (usec): min=139, max=40860k, avg=1062.61, stdev=189295.98 00:20:36.124 clat percentiles (usec): 00:20:36.124 | 1.00th=[ 135], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 151], 00:20:36.124 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:20:36.124 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 186], 95.00th=[ 196], 00:20:36.124 | 99.00th=[ 221], 99.50th=[ 237], 99.90th=[ 289], 99.95th=[ 334], 00:20:36.124 | 99.99th=[ 619] 00:20:36.124 bw ( KiB/s): min= 4496, max=12288, per=100.00%, avg=9593.26, stdev=1523.90, samples=38 00:20:36.124 iops : min= 1124, max= 3072, avg=2398.32, stdev=380.98, samples=38 00:20:36.124 lat (usec) : 250=98.09%, 500=1.88%, 750=0.02%, 1000=0.01% 00:20:36.124 lat (msec) : 2=0.01%, >=2000=0.01% 00:20:36.124 cpu : usr=0.56%, sys=1.94%, ctx=92851, majf=0, minf=5 00:20:36.124 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:36.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:20:36.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.124 issued rwts: total=46250,46592,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.124 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:36.124 00:20:36.124 Run status group 0 (all jobs): 00:20:36.124 READ: bw=3083KiB/s (3157kB/s), 3083KiB/s-3083KiB/s (3157kB/s-3157kB/s), io=181MiB (189MB), run=60001-60001msec 00:20:36.124 WRITE: bw=3106KiB/s (3181kB/s), 3106KiB/s-3106KiB/s (3181kB/s-3181kB/s), io=182MiB (191MB), run=60001-60001msec 00:20:36.124 00:20:36.124 Disk stats (read/write): 00:20:36.124 nvme0n1: ios=46346/46241, merge=0/0, ticks=10072/8124, in_queue=18196, util=99.57% 00:20:36.124 14:26:39 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:36.124 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:36.124 14:26:39 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:36.124 14:26:39 -- common/autotest_common.sh@1208 -- # local i=0 00:20:36.124 14:26:39 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:36.124 14:26:39 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:20:36.124 14:26:39 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:36.124 14:26:39 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:20:36.124 nvmf hotplug test: fio successful as expected 00:20:36.124 14:26:39 -- common/autotest_common.sh@1220 -- # return 0 00:20:36.124 14:26:39 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:20:36.124 14:26:39 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:20:36.124 14:26:39 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:36.124 14:26:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.124 14:26:39 -- common/autotest_common.sh@10 -- # set +x 00:20:36.124 14:26:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.124 14:26:39 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:20:36.124 14:26:39 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:20:36.124 14:26:39 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:20:36.124 14:26:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:36.124 14:26:39 -- nvmf/common.sh@116 -- # sync 00:20:36.124 14:26:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:36.124 14:26:39 -- nvmf/common.sh@119 -- # set +e 00:20:36.124 14:26:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:36.124 14:26:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:36.124 rmmod nvme_tcp 00:20:36.124 rmmod nvme_fabrics 00:20:36.124 rmmod nvme_keyring 00:20:36.124 14:26:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:36.124 14:26:39 -- nvmf/common.sh@123 -- # set -e 00:20:36.124 14:26:39 -- nvmf/common.sh@124 -- # return 0 00:20:36.124 14:26:39 -- nvmf/common.sh@477 -- # '[' -n 91819 ']' 00:20:36.124 14:26:39 -- nvmf/common.sh@478 -- # killprocess 91819 00:20:36.124 14:26:39 -- common/autotest_common.sh@936 -- # '[' -z 91819 ']' 00:20:36.124 14:26:39 -- common/autotest_common.sh@940 -- # kill -0 91819 00:20:36.124 14:26:39 -- common/autotest_common.sh@941 -- # uname 00:20:36.124 14:26:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:36.124 14:26:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91819 00:20:36.124 killing process with pid 91819 
00:20:36.124 14:26:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:36.124 14:26:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:36.124 14:26:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91819' 00:20:36.124 14:26:39 -- common/autotest_common.sh@955 -- # kill 91819 00:20:36.124 14:26:39 -- common/autotest_common.sh@960 -- # wait 91819 00:20:36.124 14:26:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:36.124 14:26:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:36.124 14:26:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:36.124 14:26:40 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:36.124 14:26:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:36.124 14:26:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.124 14:26:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:36.124 14:26:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.124 14:26:40 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:36.124 00:20:36.124 real 1m4.851s 00:20:36.124 user 4m7.586s 00:20:36.124 sys 0m8.203s 00:20:36.124 14:26:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:36.124 14:26:40 -- common/autotest_common.sh@10 -- # set +x 00:20:36.124 ************************************ 00:20:36.124 END TEST nvmf_initiator_timeout 00:20:36.124 ************************************ 00:20:36.124 14:26:40 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:20:36.124 14:26:40 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:20:36.124 14:26:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:36.124 14:26:40 -- common/autotest_common.sh@10 -- # set +x 00:20:36.124 14:26:40 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:20:36.124 14:26:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:36.124 14:26:40 -- common/autotest_common.sh@10 -- # set +x 00:20:36.124 14:26:40 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:20:36.124 14:26:40 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:36.124 14:26:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:36.124 14:26:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:36.124 14:26:40 -- common/autotest_common.sh@10 -- # set +x 00:20:36.124 ************************************ 00:20:36.124 START TEST nvmf_multicontroller 00:20:36.124 ************************************ 00:20:36.124 14:26:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:36.124 * Looking for test storage... 
00:20:36.124 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:36.124 14:26:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:36.124 14:26:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:36.124 14:26:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:36.124 14:26:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:36.124 14:26:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:36.124 14:26:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:36.124 14:26:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:36.124 14:26:40 -- scripts/common.sh@335 -- # IFS=.-: 00:20:36.124 14:26:40 -- scripts/common.sh@335 -- # read -ra ver1 00:20:36.124 14:26:40 -- scripts/common.sh@336 -- # IFS=.-: 00:20:36.124 14:26:40 -- scripts/common.sh@336 -- # read -ra ver2 00:20:36.124 14:26:40 -- scripts/common.sh@337 -- # local 'op=<' 00:20:36.124 14:26:40 -- scripts/common.sh@339 -- # ver1_l=2 00:20:36.124 14:26:40 -- scripts/common.sh@340 -- # ver2_l=1 00:20:36.124 14:26:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:36.124 14:26:40 -- scripts/common.sh@343 -- # case "$op" in 00:20:36.124 14:26:40 -- scripts/common.sh@344 -- # : 1 00:20:36.124 14:26:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:36.124 14:26:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:36.124 14:26:40 -- scripts/common.sh@364 -- # decimal 1 00:20:36.124 14:26:40 -- scripts/common.sh@352 -- # local d=1 00:20:36.124 14:26:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:36.124 14:26:40 -- scripts/common.sh@354 -- # echo 1 00:20:36.124 14:26:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:36.124 14:26:40 -- scripts/common.sh@365 -- # decimal 2 00:20:36.124 14:26:40 -- scripts/common.sh@352 -- # local d=2 00:20:36.124 14:26:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:36.124 14:26:40 -- scripts/common.sh@354 -- # echo 2 00:20:36.124 14:26:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:36.124 14:26:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:36.125 14:26:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:36.125 14:26:40 -- scripts/common.sh@367 -- # return 0 00:20:36.125 14:26:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:36.125 14:26:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:36.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.125 --rc genhtml_branch_coverage=1 00:20:36.125 --rc genhtml_function_coverage=1 00:20:36.125 --rc genhtml_legend=1 00:20:36.125 --rc geninfo_all_blocks=1 00:20:36.125 --rc geninfo_unexecuted_blocks=1 00:20:36.125 00:20:36.125 ' 00:20:36.125 14:26:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:36.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.125 --rc genhtml_branch_coverage=1 00:20:36.125 --rc genhtml_function_coverage=1 00:20:36.125 --rc genhtml_legend=1 00:20:36.125 --rc geninfo_all_blocks=1 00:20:36.125 --rc geninfo_unexecuted_blocks=1 00:20:36.125 00:20:36.125 ' 00:20:36.125 14:26:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:36.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.125 --rc genhtml_branch_coverage=1 00:20:36.125 --rc genhtml_function_coverage=1 00:20:36.125 --rc genhtml_legend=1 00:20:36.125 --rc geninfo_all_blocks=1 00:20:36.125 --rc geninfo_unexecuted_blocks=1 00:20:36.125 00:20:36.125 ' 00:20:36.125 
14:26:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:36.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.125 --rc genhtml_branch_coverage=1 00:20:36.125 --rc genhtml_function_coverage=1 00:20:36.125 --rc genhtml_legend=1 00:20:36.125 --rc geninfo_all_blocks=1 00:20:36.125 --rc geninfo_unexecuted_blocks=1 00:20:36.125 00:20:36.125 ' 00:20:36.125 14:26:40 -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:36.125 14:26:40 -- nvmf/common.sh@7 -- # uname -s 00:20:36.125 14:26:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:36.125 14:26:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:36.125 14:26:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:36.125 14:26:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:36.125 14:26:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:36.125 14:26:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:36.125 14:26:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:36.125 14:26:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:36.125 14:26:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:36.125 14:26:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:36.125 14:26:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:20:36.125 14:26:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:20:36.125 14:26:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:36.125 14:26:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:36.125 14:26:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:36.125 14:26:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:36.125 14:26:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:36.125 14:26:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:36.125 14:26:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:36.125 14:26:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.125 14:26:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.125 14:26:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.125 14:26:40 -- paths/export.sh@5 -- # export PATH 00:20:36.125 14:26:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.125 14:26:40 -- nvmf/common.sh@46 -- # : 0 00:20:36.125 14:26:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:36.125 14:26:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:36.125 14:26:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:36.125 14:26:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:36.125 14:26:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:36.125 14:26:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:36.125 14:26:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:36.125 14:26:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:36.125 14:26:40 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:36.125 14:26:40 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:36.125 14:26:40 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:36.125 14:26:40 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:36.125 14:26:40 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:36.125 14:26:40 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:36.125 14:26:40 -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:36.125 14:26:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:36.125 14:26:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:36.125 14:26:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:36.125 14:26:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:36.125 14:26:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:36.125 14:26:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.125 14:26:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:36.125 14:26:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.125 14:26:40 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:36.125 14:26:40 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:36.125 14:26:40 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:36.125 14:26:40 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:36.125 14:26:40 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:36.125 14:26:40 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:36.125 14:26:40 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:36.125 14:26:40 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:20:36.125 14:26:40 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:36.125 14:26:40 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:36.125 14:26:40 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:36.125 14:26:40 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:36.125 14:26:40 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:36.125 14:26:40 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:36.125 14:26:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:36.125 14:26:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:36.125 14:26:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:36.125 14:26:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:36.125 14:26:40 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:36.125 14:26:40 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:36.125 Cannot find device "nvmf_tgt_br" 00:20:36.125 14:26:40 -- nvmf/common.sh@154 -- # true 00:20:36.125 14:26:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:36.125 Cannot find device "nvmf_tgt_br2" 00:20:36.125 14:26:40 -- nvmf/common.sh@155 -- # true 00:20:36.125 14:26:40 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:36.125 14:26:40 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:36.125 Cannot find device "nvmf_tgt_br" 00:20:36.125 14:26:40 -- nvmf/common.sh@157 -- # true 00:20:36.125 14:26:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:36.125 Cannot find device "nvmf_tgt_br2" 00:20:36.125 14:26:40 -- nvmf/common.sh@158 -- # true 00:20:36.125 14:26:40 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:36.125 14:26:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:36.125 14:26:40 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:36.125 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:36.125 14:26:40 -- nvmf/common.sh@161 -- # true 00:20:36.125 14:26:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:36.125 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:36.125 14:26:40 -- nvmf/common.sh@162 -- # true 00:20:36.125 14:26:40 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:36.125 14:26:40 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:36.125 14:26:40 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:36.125 14:26:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:36.125 14:26:40 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:36.125 14:26:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:36.125 14:26:40 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:36.125 14:26:40 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:36.125 14:26:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:36.125 14:26:40 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:36.125 14:26:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:36.125 14:26:40 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 
00:20:36.125 14:26:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:36.125 14:26:40 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:36.125 14:26:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:36.126 14:26:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:36.126 14:26:40 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:36.126 14:26:40 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:36.126 14:26:40 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:36.126 14:26:40 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:36.126 14:26:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:36.126 14:26:40 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:36.126 14:26:40 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:36.126 14:26:40 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:36.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:36.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:20:36.126 00:20:36.126 --- 10.0.0.2 ping statistics --- 00:20:36.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.126 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:20:36.126 14:26:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:36.126 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:36.126 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:20:36.126 00:20:36.126 --- 10.0.0.3 ping statistics --- 00:20:36.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.126 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:20:36.126 14:26:40 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:36.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:36.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:20:36.126 00:20:36.126 --- 10.0.0.1 ping statistics --- 00:20:36.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.126 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:20:36.126 14:26:40 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:36.126 14:26:40 -- nvmf/common.sh@421 -- # return 0 00:20:36.126 14:26:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:36.126 14:26:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:36.126 14:26:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:36.126 14:26:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:36.126 14:26:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:36.126 14:26:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:36.126 14:26:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:36.126 14:26:40 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:36.126 14:26:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:36.126 14:26:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:36.126 14:26:40 -- common/autotest_common.sh@10 -- # set +x 00:20:36.126 14:26:40 -- nvmf/common.sh@469 -- # nvmfpid=92770 00:20:36.126 14:26:40 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:36.126 14:26:40 -- nvmf/common.sh@470 -- # waitforlisten 92770 00:20:36.126 14:26:40 -- common/autotest_common.sh@829 -- # '[' -z 92770 ']' 00:20:36.126 14:26:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.126 14:26:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:36.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.126 14:26:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.126 14:26:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:36.126 14:26:40 -- common/autotest_common.sh@10 -- # set +x 00:20:36.126 [2024-12-05 14:26:40.918963] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:36.126 [2024-12-05 14:26:40.919039] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:36.126 [2024-12-05 14:26:41.045079] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:36.126 [2024-12-05 14:26:41.115218] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:36.126 [2024-12-05 14:26:41.115372] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:36.126 [2024-12-05 14:26:41.115386] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:36.126 [2024-12-05 14:26:41.115394] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
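As in the previous test, nvmfappstart runs nvmf_tgt inside the target namespace so that it owns 10.0.0.2/10.0.0.3, then blocks until the application answers on its RPC socket. A condensed sketch of what that amounts to, assuming the repo layout used in this run and the default /var/tmp/spdk.sock RPC path (the harness's waitforlisten helper does the polling with bounded retries):

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
# poll until the target is up and serving RPCs
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done

The -m 0xE core mask pins the reactors to cores 1-3, which is why the reactor notices that follow report cores 1, 2 and 3 only.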
00:20:36.126 [2024-12-05 14:26:41.115512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:36.126 [2024-12-05 14:26:41.115620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:36.126 [2024-12-05 14:26:41.115645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:36.384 14:26:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:36.384 14:26:41 -- common/autotest_common.sh@862 -- # return 0 00:20:36.384 14:26:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:36.384 14:26:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:36.384 14:26:41 -- common/autotest_common.sh@10 -- # set +x 00:20:36.384 14:26:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:36.384 14:26:41 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:36.384 14:26:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.384 14:26:41 -- common/autotest_common.sh@10 -- # set +x 00:20:36.384 [2024-12-05 14:26:41.935450] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:36.384 14:26:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.384 14:26:41 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:36.384 14:26:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.384 14:26:41 -- common/autotest_common.sh@10 -- # set +x 00:20:36.384 Malloc0 00:20:36.384 14:26:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.384 14:26:41 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:36.384 14:26:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.384 14:26:41 -- common/autotest_common.sh@10 -- # set +x 00:20:36.384 14:26:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.384 14:26:41 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:36.384 14:26:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.384 14:26:41 -- common/autotest_common.sh@10 -- # set +x 00:20:36.384 14:26:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.384 14:26:42 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:36.384 14:26:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.384 14:26:42 -- common/autotest_common.sh@10 -- # set +x 00:20:36.384 [2024-12-05 14:26:42.009257] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:36.384 14:26:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.384 14:26:42 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:36.384 14:26:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.384 14:26:42 -- common/autotest_common.sh@10 -- # set +x 00:20:36.384 [2024-12-05 14:26:42.017188] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:36.384 14:26:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.384 14:26:42 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:36.384 14:26:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.384 14:26:42 -- common/autotest_common.sh@10 -- # set +x 00:20:36.642 Malloc1 00:20:36.642 14:26:42 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.642 14:26:42 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:36.642 14:26:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.642 14:26:42 -- common/autotest_common.sh@10 -- # set +x 00:20:36.642 14:26:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.642 14:26:42 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:36.642 14:26:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.642 14:26:42 -- common/autotest_common.sh@10 -- # set +x 00:20:36.642 14:26:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.642 14:26:42 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:36.642 14:26:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.642 14:26:42 -- common/autotest_common.sh@10 -- # set +x 00:20:36.642 14:26:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.642 14:26:42 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:36.642 14:26:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.642 14:26:42 -- common/autotest_common.sh@10 -- # set +x 00:20:36.642 14:26:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.642 14:26:42 -- host/multicontroller.sh@44 -- # bdevperf_pid=92822 00:20:36.642 14:26:42 -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:36.642 14:26:42 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:36.642 14:26:42 -- host/multicontroller.sh@47 -- # waitforlisten 92822 /var/tmp/bdevperf.sock 00:20:36.642 14:26:42 -- common/autotest_common.sh@829 -- # '[' -z 92822 ']' 00:20:36.642 14:26:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:36.642 14:26:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:36.642 14:26:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:36.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
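The host side of this test is bdevperf started with -z (stay idle until configured over RPC) on its own socket, /var/tmp/bdevperf.sock, so controllers can be attached and detached at runtime while the nvmf target keeps its own /var/tmp/spdk.sock. All of the attach/detach calls that follow are issued against that bdevperf socket; a sketch of the happy path, assuming the same paths as above:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
     -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers      # expect NVMe0 listed once
# re-attaching the same controller name with a different hostnqn, subsystem, or
# multipath mode is expected to fail; the NOT ... wrappers below assert exactly that
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Each rejected call surfaces in the log below as a 'Code=-114 Msg=A controller named NVMe0 already exists ...' JSON-RPC error, which is the behavior under test.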
00:20:36.642 14:26:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:36.642 14:26:42 -- common/autotest_common.sh@10 -- # set +x 00:20:37.577 14:26:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:37.577 14:26:43 -- common/autotest_common.sh@862 -- # return 0 00:20:37.577 14:26:43 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:37.577 14:26:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.577 14:26:43 -- common/autotest_common.sh@10 -- # set +x 00:20:37.835 NVMe0n1 00:20:37.835 14:26:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.835 14:26:43 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:37.835 14:26:43 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:37.835 14:26:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.835 14:26:43 -- common/autotest_common.sh@10 -- # set +x 00:20:37.835 14:26:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.835 1 00:20:37.835 14:26:43 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:37.835 14:26:43 -- common/autotest_common.sh@650 -- # local es=0 00:20:37.835 14:26:43 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:37.835 14:26:43 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:37.835 14:26:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:37.835 14:26:43 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:37.835 14:26:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:37.835 14:26:43 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:37.835 14:26:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.835 14:26:43 -- common/autotest_common.sh@10 -- # set +x 00:20:37.836 2024/12/05 14:26:43 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:37.836 request: 00:20:37.836 { 00:20:37.836 "method": "bdev_nvme_attach_controller", 00:20:37.836 "params": { 00:20:37.836 "name": "NVMe0", 00:20:37.836 "trtype": "tcp", 00:20:37.836 "traddr": "10.0.0.2", 00:20:37.836 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:37.836 "hostaddr": "10.0.0.2", 00:20:37.836 "hostsvcid": "60000", 00:20:37.836 "adrfam": "ipv4", 00:20:37.836 "trsvcid": "4420", 00:20:37.836 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:20:37.836 } 00:20:37.836 } 00:20:37.836 Got JSON-RPC error response 00:20:37.836 GoRPCClient: error on JSON-RPC call 00:20:37.836 14:26:43 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:37.836 14:26:43 -- 
common/autotest_common.sh@653 -- # es=1 00:20:37.836 14:26:43 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:37.836 14:26:43 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:37.836 14:26:43 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:37.836 14:26:43 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:37.836 14:26:43 -- common/autotest_common.sh@650 -- # local es=0 00:20:37.836 14:26:43 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:37.836 14:26:43 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:37.836 14:26:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:37.836 14:26:43 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:37.836 14:26:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:37.836 14:26:43 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:37.836 14:26:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.836 14:26:43 -- common/autotest_common.sh@10 -- # set +x 00:20:37.836 2024/12/05 14:26:43 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:37.836 request: 00:20:37.836 { 00:20:37.836 "method": "bdev_nvme_attach_controller", 00:20:37.836 "params": { 00:20:37.836 "name": "NVMe0", 00:20:37.836 "trtype": "tcp", 00:20:37.836 "traddr": "10.0.0.2", 00:20:37.836 "hostaddr": "10.0.0.2", 00:20:37.836 "hostsvcid": "60000", 00:20:37.836 "adrfam": "ipv4", 00:20:37.836 "trsvcid": "4420", 00:20:37.836 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:20:37.836 } 00:20:37.836 } 00:20:37.836 Got JSON-RPC error response 00:20:37.836 GoRPCClient: error on JSON-RPC call 00:20:37.836 14:26:43 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:37.836 14:26:43 -- common/autotest_common.sh@653 -- # es=1 00:20:37.836 14:26:43 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:37.836 14:26:43 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:37.836 14:26:43 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:37.836 14:26:43 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:37.836 14:26:43 -- common/autotest_common.sh@650 -- # local es=0 00:20:37.836 14:26:43 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:37.836 14:26:43 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:37.836 14:26:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:37.836 14:26:43 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:37.836 14:26:43 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:37.836 14:26:43 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:37.836 14:26:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.836 14:26:43 -- common/autotest_common.sh@10 -- # set +x 00:20:37.836 2024/12/05 14:26:43 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:20:37.836 request: 00:20:37.836 { 00:20:37.836 "method": "bdev_nvme_attach_controller", 00:20:37.836 "params": { 00:20:37.836 "name": "NVMe0", 00:20:37.836 "trtype": "tcp", 00:20:37.836 "traddr": "10.0.0.2", 00:20:37.836 "hostaddr": "10.0.0.2", 00:20:37.836 "hostsvcid": "60000", 00:20:37.836 "adrfam": "ipv4", 00:20:37.836 "trsvcid": "4420", 00:20:37.836 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.836 "multipath": "disable" 00:20:37.836 } 00:20:37.836 } 00:20:37.836 Got JSON-RPC error response 00:20:37.836 GoRPCClient: error on JSON-RPC call 00:20:37.836 14:26:43 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:37.836 14:26:43 -- common/autotest_common.sh@653 -- # es=1 00:20:37.836 14:26:43 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:37.836 14:26:43 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:37.836 14:26:43 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:37.836 14:26:43 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:37.836 14:26:43 -- common/autotest_common.sh@650 -- # local es=0 00:20:37.836 14:26:43 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:37.836 14:26:43 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:37.836 14:26:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:37.836 14:26:43 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:37.836 14:26:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:37.836 14:26:43 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:37.836 14:26:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.836 14:26:43 -- common/autotest_common.sh@10 -- # set +x 00:20:37.836 2024/12/05 14:26:43 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:37.836 request: 00:20:37.836 { 00:20:37.836 "method": "bdev_nvme_attach_controller", 00:20:37.836 "params": { 00:20:37.836 "name": "NVMe0", 
00:20:37.836 "trtype": "tcp", 00:20:37.836 "traddr": "10.0.0.2", 00:20:37.836 "hostaddr": "10.0.0.2", 00:20:37.836 "hostsvcid": "60000", 00:20:37.836 "adrfam": "ipv4", 00:20:37.836 "trsvcid": "4420", 00:20:37.836 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.836 "multipath": "failover" 00:20:37.836 } 00:20:37.837 } 00:20:37.837 Got JSON-RPC error response 00:20:37.837 GoRPCClient: error on JSON-RPC call 00:20:37.837 14:26:43 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:37.837 14:26:43 -- common/autotest_common.sh@653 -- # es=1 00:20:37.837 14:26:43 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:37.837 14:26:43 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:37.837 14:26:43 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:37.837 14:26:43 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:37.837 14:26:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.837 14:26:43 -- common/autotest_common.sh@10 -- # set +x 00:20:37.837 00:20:37.837 14:26:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.837 14:26:43 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:37.837 14:26:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.837 14:26:43 -- common/autotest_common.sh@10 -- # set +x 00:20:37.837 14:26:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.837 14:26:43 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:37.837 14:26:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.837 14:26:43 -- common/autotest_common.sh@10 -- # set +x 00:20:38.096 00:20:38.096 14:26:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.096 14:26:43 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:38.096 14:26:43 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:38.096 14:26:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.096 14:26:43 -- common/autotest_common.sh@10 -- # set +x 00:20:38.096 14:26:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.096 14:26:43 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:38.096 14:26:43 -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:39.032 0 00:20:39.032 14:26:44 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:39.032 14:26:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.032 14:26:44 -- common/autotest_common.sh@10 -- # set +x 00:20:39.032 14:26:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.032 14:26:44 -- host/multicontroller.sh@100 -- # killprocess 92822 00:20:39.032 14:26:44 -- common/autotest_common.sh@936 -- # '[' -z 92822 ']' 00:20:39.032 14:26:44 -- common/autotest_common.sh@940 -- # kill -0 92822 00:20:39.032 14:26:44 -- common/autotest_common.sh@941 -- # uname 00:20:39.032 14:26:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:39.032 14:26:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92822 00:20:39.291 killing process with pid 92822 00:20:39.291 
14:26:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:39.291 14:26:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:39.291 14:26:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92822' 00:20:39.291 14:26:44 -- common/autotest_common.sh@955 -- # kill 92822 00:20:39.291 14:26:44 -- common/autotest_common.sh@960 -- # wait 92822 00:20:39.291 14:26:44 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:39.291 14:26:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.291 14:26:44 -- common/autotest_common.sh@10 -- # set +x 00:20:39.291 14:26:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.291 14:26:44 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:39.291 14:26:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.291 14:26:44 -- common/autotest_common.sh@10 -- # set +x 00:20:39.291 14:26:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.291 14:26:44 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:39.291 14:26:44 -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:39.291 14:26:44 -- common/autotest_common.sh@1607 -- # read -r file 00:20:39.291 14:26:44 -- common/autotest_common.sh@1606 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:20:39.291 14:26:44 -- common/autotest_common.sh@1606 -- # sort -u 00:20:39.291 14:26:44 -- common/autotest_common.sh@1608 -- # cat 00:20:39.291 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:20:39.291 [2024-12-05 14:26:42.132895] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:39.291 [2024-12-05 14:26:42.132989] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92822 ] 00:20:39.291 [2024-12-05 14:26:42.268562] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.291 [2024-12-05 14:26:42.336731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.291 [2024-12-05 14:26:43.483800] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 3aa3b360-9d0b-4858-b397-2f6bd34c278a already exists 00:20:39.291 [2024-12-05 14:26:43.483858] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:3aa3b360-9d0b-4858-b397-2f6bd34c278a alias for bdev NVMe1n1 00:20:39.291 [2024-12-05 14:26:43.483894] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:39.291 Running I/O for 1 seconds... 
00:20:39.291 00:20:39.291 Latency(us) 00:20:39.291 [2024-12-05T14:26:44.939Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.291 [2024-12-05T14:26:44.939Z] Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:39.291 NVMe0n1 : 1.00 22852.24 89.27 0.00 0.00 5594.54 3202.33 12988.04 00:20:39.291 [2024-12-05T14:26:44.939Z] =================================================================================================================== 00:20:39.291 [2024-12-05T14:26:44.939Z] Total : 22852.24 89.27 0.00 0.00 5594.54 3202.33 12988.04 00:20:39.291 Received shutdown signal, test time was about 1.000000 seconds 00:20:39.291 00:20:39.291 Latency(us) 00:20:39.291 [2024-12-05T14:26:44.939Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.291 [2024-12-05T14:26:44.939Z] =================================================================================================================== 00:20:39.291 [2024-12-05T14:26:44.939Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:39.291 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:20:39.291 14:26:44 -- common/autotest_common.sh@1613 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:39.291 14:26:44 -- common/autotest_common.sh@1607 -- # read -r file 00:20:39.291 14:26:44 -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:39.291 14:26:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:39.291 14:26:44 -- nvmf/common.sh@116 -- # sync 00:20:39.550 14:26:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:39.550 14:26:45 -- nvmf/common.sh@119 -- # set +e 00:20:39.550 14:26:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:39.550 14:26:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:39.550 rmmod nvme_tcp 00:20:39.550 rmmod nvme_fabrics 00:20:39.550 rmmod nvme_keyring 00:20:39.550 14:26:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:39.550 14:26:45 -- nvmf/common.sh@123 -- # set -e 00:20:39.550 14:26:45 -- nvmf/common.sh@124 -- # return 0 00:20:39.550 14:26:45 -- nvmf/common.sh@477 -- # '[' -n 92770 ']' 00:20:39.550 14:26:45 -- nvmf/common.sh@478 -- # killprocess 92770 00:20:39.550 14:26:45 -- common/autotest_common.sh@936 -- # '[' -z 92770 ']' 00:20:39.550 14:26:45 -- common/autotest_common.sh@940 -- # kill -0 92770 00:20:39.550 14:26:45 -- common/autotest_common.sh@941 -- # uname 00:20:39.550 14:26:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:39.550 14:26:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92770 00:20:39.550 14:26:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:39.550 14:26:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:39.550 killing process with pid 92770 00:20:39.550 14:26:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92770' 00:20:39.550 14:26:45 -- common/autotest_common.sh@955 -- # kill 92770 00:20:39.550 14:26:45 -- common/autotest_common.sh@960 -- # wait 92770 00:20:39.808 14:26:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:39.808 14:26:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:39.808 14:26:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:39.808 14:26:45 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:39.808 14:26:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:39.808 14:26:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.808 14:26:45 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:20:39.808 14:26:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.066 14:26:45 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:40.066 ************************************ 00:20:40.066 END TEST nvmf_multicontroller 00:20:40.066 ************************************ 00:20:40.066 00:20:40.066 real 0m5.157s 00:20:40.066 user 0m16.148s 00:20:40.066 sys 0m1.128s 00:20:40.066 14:26:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:40.066 14:26:45 -- common/autotest_common.sh@10 -- # set +x 00:20:40.066 14:26:45 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:40.066 14:26:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:40.066 14:26:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:40.066 14:26:45 -- common/autotest_common.sh@10 -- # set +x 00:20:40.066 ************************************ 00:20:40.066 START TEST nvmf_aer 00:20:40.066 ************************************ 00:20:40.066 14:26:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:40.066 * Looking for test storage... 00:20:40.066 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:40.066 14:26:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:40.066 14:26:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:40.066 14:26:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:40.066 14:26:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:40.066 14:26:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:40.066 14:26:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:40.066 14:26:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:40.066 14:26:45 -- scripts/common.sh@335 -- # IFS=.-: 00:20:40.066 14:26:45 -- scripts/common.sh@335 -- # read -ra ver1 00:20:40.066 14:26:45 -- scripts/common.sh@336 -- # IFS=.-: 00:20:40.066 14:26:45 -- scripts/common.sh@336 -- # read -ra ver2 00:20:40.066 14:26:45 -- scripts/common.sh@337 -- # local 'op=<' 00:20:40.066 14:26:45 -- scripts/common.sh@339 -- # ver1_l=2 00:20:40.066 14:26:45 -- scripts/common.sh@340 -- # ver2_l=1 00:20:40.066 14:26:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:40.066 14:26:45 -- scripts/common.sh@343 -- # case "$op" in 00:20:40.066 14:26:45 -- scripts/common.sh@344 -- # : 1 00:20:40.066 14:26:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:40.066 14:26:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:40.066 14:26:45 -- scripts/common.sh@364 -- # decimal 1 00:20:40.066 14:26:45 -- scripts/common.sh@352 -- # local d=1 00:20:40.066 14:26:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:40.066 14:26:45 -- scripts/common.sh@354 -- # echo 1 00:20:40.066 14:26:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:40.066 14:26:45 -- scripts/common.sh@365 -- # decimal 2 00:20:40.325 14:26:45 -- scripts/common.sh@352 -- # local d=2 00:20:40.325 14:26:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:40.325 14:26:45 -- scripts/common.sh@354 -- # echo 2 00:20:40.325 14:26:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:40.325 14:26:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:40.325 14:26:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:40.325 14:26:45 -- scripts/common.sh@367 -- # return 0 00:20:40.325 14:26:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:40.325 14:26:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:40.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.325 --rc genhtml_branch_coverage=1 00:20:40.325 --rc genhtml_function_coverage=1 00:20:40.325 --rc genhtml_legend=1 00:20:40.325 --rc geninfo_all_blocks=1 00:20:40.325 --rc geninfo_unexecuted_blocks=1 00:20:40.325 00:20:40.325 ' 00:20:40.325 14:26:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:40.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.325 --rc genhtml_branch_coverage=1 00:20:40.325 --rc genhtml_function_coverage=1 00:20:40.325 --rc genhtml_legend=1 00:20:40.325 --rc geninfo_all_blocks=1 00:20:40.325 --rc geninfo_unexecuted_blocks=1 00:20:40.325 00:20:40.325 ' 00:20:40.325 14:26:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:40.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.325 --rc genhtml_branch_coverage=1 00:20:40.325 --rc genhtml_function_coverage=1 00:20:40.325 --rc genhtml_legend=1 00:20:40.325 --rc geninfo_all_blocks=1 00:20:40.325 --rc geninfo_unexecuted_blocks=1 00:20:40.325 00:20:40.325 ' 00:20:40.325 14:26:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:40.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.325 --rc genhtml_branch_coverage=1 00:20:40.325 --rc genhtml_function_coverage=1 00:20:40.325 --rc genhtml_legend=1 00:20:40.325 --rc geninfo_all_blocks=1 00:20:40.325 --rc geninfo_unexecuted_blocks=1 00:20:40.325 00:20:40.325 ' 00:20:40.325 14:26:45 -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:40.325 14:26:45 -- nvmf/common.sh@7 -- # uname -s 00:20:40.325 14:26:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:40.325 14:26:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:40.325 14:26:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:40.325 14:26:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:40.325 14:26:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:40.325 14:26:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:40.325 14:26:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:40.325 14:26:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:40.325 14:26:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:40.325 14:26:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:40.325 14:26:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:20:40.325 
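Note: the ver1/ver2 shuffle traced above is scripts/common.sh asking whether the installed lcov predates 2.0, so the matching --rc option spelling gets exported. The helper below is an illustrative stand-in, not the repo's actual function; the comparison itself is the same dot-field-wise test the trace walks through.

# Minimal sketch of the version test above: split both versions on dots and
# compare field by field, so "1.15" < "2" holds.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        local x=${a[i]:-0} y=${b[i]:-0}
        ((x < y)) && return 0
        ((x > y)) && return 1
    done
    return 1    # equal, so not less-than
}

version_lt 1.15 2 && echo "lcov < 2: use the pre-2.0 --rc flag spelling"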
14:26:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:20:40.325 14:26:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:40.325 14:26:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:40.325 14:26:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:40.325 14:26:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:40.325 14:26:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:40.325 14:26:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:40.325 14:26:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:40.326 14:26:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.326 14:26:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.326 14:26:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.326 14:26:45 -- paths/export.sh@5 -- # export PATH 00:20:40.326 14:26:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.326 14:26:45 -- nvmf/common.sh@46 -- # : 0 00:20:40.326 14:26:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:40.326 14:26:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:40.326 14:26:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:40.326 14:26:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:40.326 14:26:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:40.326 14:26:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:20:40.326 14:26:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:40.326 14:26:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:40.326 14:26:45 -- host/aer.sh@11 -- # nvmftestinit 00:20:40.326 14:26:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:40.326 14:26:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:40.326 14:26:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:40.326 14:26:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:40.326 14:26:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:40.326 14:26:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.326 14:26:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:40.326 14:26:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.326 14:26:45 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:40.326 14:26:45 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:40.326 14:26:45 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:40.326 14:26:45 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:40.326 14:26:45 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:40.326 14:26:45 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:40.326 14:26:45 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:40.326 14:26:45 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:40.326 14:26:45 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:40.326 14:26:45 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:40.326 14:26:45 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:40.326 14:26:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:40.326 14:26:45 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:40.326 14:26:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:40.326 14:26:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:40.326 14:26:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:40.326 14:26:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:40.326 14:26:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:40.326 14:26:45 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:40.326 14:26:45 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:40.326 Cannot find device "nvmf_tgt_br" 00:20:40.326 14:26:45 -- nvmf/common.sh@154 -- # true 00:20:40.326 14:26:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:40.326 Cannot find device "nvmf_tgt_br2" 00:20:40.326 14:26:45 -- nvmf/common.sh@155 -- # true 00:20:40.326 14:26:45 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:40.326 14:26:45 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:40.326 Cannot find device "nvmf_tgt_br" 00:20:40.326 14:26:45 -- nvmf/common.sh@157 -- # true 00:20:40.326 14:26:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:40.326 Cannot find device "nvmf_tgt_br2" 00:20:40.326 14:26:45 -- nvmf/common.sh@158 -- # true 00:20:40.326 14:26:45 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:40.326 14:26:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:40.326 14:26:45 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:40.326 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:40.326 14:26:45 -- nvmf/common.sh@161 -- # true 00:20:40.326 14:26:45 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:40.326 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:40.326 14:26:45 -- nvmf/common.sh@162 -- # true 00:20:40.326 14:26:45 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:40.326 14:26:45 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:40.326 14:26:45 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:40.326 14:26:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:40.326 14:26:45 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:40.326 14:26:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:40.326 14:26:45 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:40.326 14:26:45 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:40.326 14:26:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:40.326 14:26:45 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:40.326 14:26:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:40.326 14:26:45 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:40.326 14:26:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:40.326 14:26:45 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:40.326 14:26:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:40.326 14:26:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:40.590 14:26:45 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:40.590 14:26:45 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:40.590 14:26:45 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:40.590 14:26:45 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:40.590 14:26:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:40.590 14:26:46 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:40.590 14:26:46 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:40.590 14:26:46 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:40.590 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:40.590 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:20:40.590 00:20:40.590 --- 10.0.0.2 ping statistics --- 00:20:40.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.590 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:20:40.590 14:26:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:40.590 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:40.590 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:20:40.590 00:20:40.590 --- 10.0.0.3 ping statistics --- 00:20:40.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.590 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:20:40.590 14:26:46 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:40.591 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:40.591 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.016 ms 00:20:40.591 00:20:40.591 --- 10.0.0.1 ping statistics --- 00:20:40.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.591 rtt min/avg/max/mdev = 0.016/0.016/0.016/0.000 ms 00:20:40.591 14:26:46 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:40.591 14:26:46 -- nvmf/common.sh@421 -- # return 0 00:20:40.591 14:26:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:40.591 14:26:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:40.591 14:26:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:40.591 14:26:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:40.591 14:26:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:40.591 14:26:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:40.591 14:26:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:40.591 14:26:46 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:40.591 14:26:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:40.591 14:26:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:40.591 14:26:46 -- common/autotest_common.sh@10 -- # set +x 00:20:40.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:40.591 14:26:46 -- nvmf/common.sh@469 -- # nvmfpid=93079 00:20:40.591 14:26:46 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:40.591 14:26:46 -- nvmf/common.sh@470 -- # waitforlisten 93079 00:20:40.591 14:26:46 -- common/autotest_common.sh@829 -- # '[' -z 93079 ']' 00:20:40.591 14:26:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.591 14:26:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:40.591 14:26:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.591 14:26:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:40.591 14:26:46 -- common/autotest_common.sh@10 -- # set +x 00:20:40.591 [2024-12-05 14:26:46.120104] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:40.591 [2024-12-05 14:26:46.120297] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.850 [2024-12-05 14:26:46.253730] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:40.850 [2024-12-05 14:26:46.325359] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:40.850 [2024-12-05 14:26:46.325679] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:40.850 [2024-12-05 14:26:46.325791] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:40.850 [2024-12-05 14:26:46.325966] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
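Note on the nvmf_veth_init trace above: the earlier "Cannot find device" / "Cannot open network namespace" lines are just teardown of a previous run finding nothing to remove; the rest builds the test topology that every nvmf host test here relies on. Condensed to its essentials (device names, addresses and rules copied from the log, error handling omitted):

# One namespace for the target, three veth pairs bridged back to the initiator side.
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target path 1
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # target path 2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge && ip link set nvmf_br up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                      # reachability check, as above

nvmf_tgt is then launched under ip netns exec nvmf_tgt_ns_spdk, so the NVMe/TCP listener on 10.0.0.2:4420 is reachable from the initiator side only through this veth/bridge path.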
00:20:40.850 [2024-12-05 14:26:46.326286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.850 [2024-12-05 14:26:46.326411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.850 [2024-12-05 14:26:46.326494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.850 [2024-12-05 14:26:46.326494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:41.786 14:26:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:41.786 14:26:47 -- common/autotest_common.sh@862 -- # return 0 00:20:41.786 14:26:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:41.786 14:26:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:41.786 14:26:47 -- common/autotest_common.sh@10 -- # set +x 00:20:41.787 14:26:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:41.787 14:26:47 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:41.787 14:26:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.787 14:26:47 -- common/autotest_common.sh@10 -- # set +x 00:20:41.787 [2024-12-05 14:26:47.190904] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:41.787 14:26:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.787 14:26:47 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:41.787 14:26:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.787 14:26:47 -- common/autotest_common.sh@10 -- # set +x 00:20:41.787 Malloc0 00:20:41.787 14:26:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.787 14:26:47 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:41.787 14:26:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.787 14:26:47 -- common/autotest_common.sh@10 -- # set +x 00:20:41.787 14:26:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.787 14:26:47 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:41.787 14:26:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.787 14:26:47 -- common/autotest_common.sh@10 -- # set +x 00:20:41.787 14:26:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.787 14:26:47 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:41.787 14:26:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.787 14:26:47 -- common/autotest_common.sh@10 -- # set +x 00:20:41.787 [2024-12-05 14:26:47.258689] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:41.787 14:26:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.787 14:26:47 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:41.787 14:26:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.787 14:26:47 -- common/autotest_common.sh@10 -- # set +x 00:20:41.787 [2024-12-05 14:26:47.266439] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:41.787 [ 00:20:41.787 { 00:20:41.787 "allow_any_host": true, 00:20:41.787 "hosts": [], 00:20:41.787 "listen_addresses": [], 00:20:41.787 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:41.787 "subtype": "Discovery" 00:20:41.787 }, 00:20:41.787 { 00:20:41.787 "allow_any_host": true, 00:20:41.787 "hosts": 
[], 00:20:41.787 "listen_addresses": [ 00:20:41.787 { 00:20:41.787 "adrfam": "IPv4", 00:20:41.787 "traddr": "10.0.0.2", 00:20:41.787 "transport": "TCP", 00:20:41.787 "trsvcid": "4420", 00:20:41.787 "trtype": "TCP" 00:20:41.787 } 00:20:41.787 ], 00:20:41.787 "max_cntlid": 65519, 00:20:41.787 "max_namespaces": 2, 00:20:41.787 "min_cntlid": 1, 00:20:41.787 "model_number": "SPDK bdev Controller", 00:20:41.787 "namespaces": [ 00:20:41.787 { 00:20:41.787 "bdev_name": "Malloc0", 00:20:41.787 "name": "Malloc0", 00:20:41.787 "nguid": "DF4B0147E7674061B21567FF32AF2C56", 00:20:41.787 "nsid": 1, 00:20:41.787 "uuid": "df4b0147-e767-4061-b215-67ff32af2c56" 00:20:41.787 } 00:20:41.787 ], 00:20:41.787 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.787 "serial_number": "SPDK00000000000001", 00:20:41.787 "subtype": "NVMe" 00:20:41.787 } 00:20:41.787 ] 00:20:41.787 14:26:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.787 14:26:47 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:41.787 14:26:47 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:41.787 14:26:47 -- host/aer.sh@33 -- # aerpid=93133 00:20:41.787 14:26:47 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:41.787 14:26:47 -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:41.787 14:26:47 -- common/autotest_common.sh@1254 -- # local i=0 00:20:41.787 14:26:47 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:41.787 14:26:47 -- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']' 00:20:41.787 14:26:47 -- common/autotest_common.sh@1257 -- # i=1 00:20:41.787 14:26:47 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:20:41.787 14:26:47 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:41.787 14:26:47 -- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']' 00:20:41.787 14:26:47 -- common/autotest_common.sh@1257 -- # i=2 00:20:41.787 14:26:47 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:20:42.047 14:26:47 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:42.047 14:26:47 -- common/autotest_common.sh@1261 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:42.047 14:26:47 -- common/autotest_common.sh@1265 -- # return 0 00:20:42.047 14:26:47 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:42.047 14:26:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.047 14:26:47 -- common/autotest_common.sh@10 -- # set +x 00:20:42.047 Malloc1 00:20:42.047 14:26:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.047 14:26:47 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:42.047 14:26:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.047 14:26:47 -- common/autotest_common.sh@10 -- # set +x 00:20:42.047 14:26:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.047 14:26:47 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:42.047 14:26:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.047 14:26:47 -- common/autotest_common.sh@10 -- # set +x 00:20:42.047 Asynchronous Event Request test 00:20:42.047 Attaching to 10.0.0.2 00:20:42.047 Attached to 10.0.0.2 00:20:42.047 Registering asynchronous event callbacks... 00:20:42.047 Starting namespace attribute notice tests for all controllers... 
00:20:42.047 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:42.047 aer_cb - Changed Namespace 00:20:42.047 Cleaning up... 00:20:42.047 [ 00:20:42.047 { 00:20:42.047 "allow_any_host": true, 00:20:42.047 "hosts": [], 00:20:42.047 "listen_addresses": [], 00:20:42.047 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:42.047 "subtype": "Discovery" 00:20:42.047 }, 00:20:42.047 { 00:20:42.047 "allow_any_host": true, 00:20:42.047 "hosts": [], 00:20:42.047 "listen_addresses": [ 00:20:42.047 { 00:20:42.047 "adrfam": "IPv4", 00:20:42.047 "traddr": "10.0.0.2", 00:20:42.047 "transport": "TCP", 00:20:42.047 "trsvcid": "4420", 00:20:42.047 "trtype": "TCP" 00:20:42.047 } 00:20:42.047 ], 00:20:42.047 "max_cntlid": 65519, 00:20:42.047 "max_namespaces": 2, 00:20:42.047 "min_cntlid": 1, 00:20:42.047 "model_number": "SPDK bdev Controller", 00:20:42.047 "namespaces": [ 00:20:42.047 { 00:20:42.047 "bdev_name": "Malloc0", 00:20:42.047 "name": "Malloc0", 00:20:42.047 "nguid": "DF4B0147E7674061B21567FF32AF2C56", 00:20:42.047 "nsid": 1, 00:20:42.047 "uuid": "df4b0147-e767-4061-b215-67ff32af2c56" 00:20:42.047 }, 00:20:42.047 { 00:20:42.047 "bdev_name": "Malloc1", 00:20:42.047 "name": "Malloc1", 00:20:42.047 "nguid": "BEF392D2F59F4D8BAAE98BF3D789236A", 00:20:42.047 "nsid": 2, 00:20:42.047 "uuid": "bef392d2-f59f-4d8b-aae9-8bf3d789236a" 00:20:42.047 } 00:20:42.047 ], 00:20:42.047 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:42.047 "serial_number": "SPDK00000000000001", 00:20:42.047 "subtype": "NVMe" 00:20:42.047 } 00:20:42.047 ] 00:20:42.047 14:26:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.047 14:26:47 -- host/aer.sh@43 -- # wait 93133 00:20:42.047 14:26:47 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:42.047 14:26:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.047 14:26:47 -- common/autotest_common.sh@10 -- # set +x 00:20:42.047 14:26:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.047 14:26:47 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:42.047 14:26:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.047 14:26:47 -- common/autotest_common.sh@10 -- # set +x 00:20:42.047 14:26:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.047 14:26:47 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:42.047 14:26:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.047 14:26:47 -- common/autotest_common.sh@10 -- # set +x 00:20:42.047 14:26:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.047 14:26:47 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:42.047 14:26:47 -- host/aer.sh@51 -- # nvmftestfini 00:20:42.047 14:26:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:42.047 14:26:47 -- nvmf/common.sh@116 -- # sync 00:20:42.306 14:26:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:42.306 14:26:47 -- nvmf/common.sh@119 -- # set +e 00:20:42.306 14:26:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:42.306 14:26:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:42.306 rmmod nvme_tcp 00:20:42.306 rmmod nvme_fabrics 00:20:42.306 rmmod nvme_keyring 00:20:42.306 14:26:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:42.306 14:26:47 -- nvmf/common.sh@123 -- # set -e 00:20:42.306 14:26:47 -- nvmf/common.sh@124 -- # return 0 00:20:42.306 14:26:47 -- nvmf/common.sh@477 -- # '[' -n 93079 ']' 00:20:42.306 14:26:47 -- nvmf/common.sh@478 -- # killprocess 93079 00:20:42.306 14:26:47 -- 
common/autotest_common.sh@936 -- # '[' -z 93079 ']' 00:20:42.306 14:26:47 -- common/autotest_common.sh@940 -- # kill -0 93079 00:20:42.306 14:26:47 -- common/autotest_common.sh@941 -- # uname 00:20:42.306 14:26:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:42.306 14:26:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93079 00:20:42.306 killing process with pid 93079 00:20:42.306 14:26:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:42.306 14:26:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:42.306 14:26:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93079' 00:20:42.306 14:26:47 -- common/autotest_common.sh@955 -- # kill 93079 00:20:42.306 [2024-12-05 14:26:47.831599] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:42.306 14:26:47 -- common/autotest_common.sh@960 -- # wait 93079 00:20:42.588 14:26:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:42.588 14:26:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:42.588 14:26:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:42.588 14:26:48 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:42.588 14:26:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:42.588 14:26:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.588 14:26:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:42.588 14:26:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.589 14:26:48 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:42.589 00:20:42.589 real 0m2.538s 00:20:42.589 user 0m7.129s 00:20:42.589 sys 0m0.692s 00:20:42.589 14:26:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:42.589 14:26:48 -- common/autotest_common.sh@10 -- # set +x 00:20:42.589 ************************************ 00:20:42.589 END TEST nvmf_aer 00:20:42.589 ************************************ 00:20:42.589 14:26:48 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:42.589 14:26:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:42.589 14:26:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:42.589 14:26:48 -- common/autotest_common.sh@10 -- # set +x 00:20:42.589 ************************************ 00:20:42.589 START TEST nvmf_async_init 00:20:42.589 ************************************ 00:20:42.589 14:26:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:42.589 * Looking for test storage... 
00:20:42.589 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:42.589 14:26:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:42.589 14:26:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:42.589 14:26:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:42.863 14:26:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:42.863 14:26:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:42.863 14:26:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:42.863 14:26:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:42.863 14:26:48 -- scripts/common.sh@335 -- # IFS=.-: 00:20:42.863 14:26:48 -- scripts/common.sh@335 -- # read -ra ver1 00:20:42.863 14:26:48 -- scripts/common.sh@336 -- # IFS=.-: 00:20:42.863 14:26:48 -- scripts/common.sh@336 -- # read -ra ver2 00:20:42.863 14:26:48 -- scripts/common.sh@337 -- # local 'op=<' 00:20:42.863 14:26:48 -- scripts/common.sh@339 -- # ver1_l=2 00:20:42.863 14:26:48 -- scripts/common.sh@340 -- # ver2_l=1 00:20:42.863 14:26:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:42.863 14:26:48 -- scripts/common.sh@343 -- # case "$op" in 00:20:42.863 14:26:48 -- scripts/common.sh@344 -- # : 1 00:20:42.863 14:26:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:42.863 14:26:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:42.863 14:26:48 -- scripts/common.sh@364 -- # decimal 1 00:20:42.863 14:26:48 -- scripts/common.sh@352 -- # local d=1 00:20:42.863 14:26:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:42.863 14:26:48 -- scripts/common.sh@354 -- # echo 1 00:20:42.863 14:26:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:42.863 14:26:48 -- scripts/common.sh@365 -- # decimal 2 00:20:42.863 14:26:48 -- scripts/common.sh@352 -- # local d=2 00:20:42.863 14:26:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:42.863 14:26:48 -- scripts/common.sh@354 -- # echo 2 00:20:42.863 14:26:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:42.863 14:26:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:42.863 14:26:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:42.863 14:26:48 -- scripts/common.sh@367 -- # return 0 00:20:42.863 14:26:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:42.863 14:26:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:42.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.863 --rc genhtml_branch_coverage=1 00:20:42.863 --rc genhtml_function_coverage=1 00:20:42.863 --rc genhtml_legend=1 00:20:42.863 --rc geninfo_all_blocks=1 00:20:42.863 --rc geninfo_unexecuted_blocks=1 00:20:42.863 00:20:42.863 ' 00:20:42.863 14:26:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:42.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.863 --rc genhtml_branch_coverage=1 00:20:42.863 --rc genhtml_function_coverage=1 00:20:42.863 --rc genhtml_legend=1 00:20:42.863 --rc geninfo_all_blocks=1 00:20:42.863 --rc geninfo_unexecuted_blocks=1 00:20:42.863 00:20:42.863 ' 00:20:42.863 14:26:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:42.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.863 --rc genhtml_branch_coverage=1 00:20:42.863 --rc genhtml_function_coverage=1 00:20:42.863 --rc genhtml_legend=1 00:20:42.863 --rc geninfo_all_blocks=1 00:20:42.863 --rc geninfo_unexecuted_blocks=1 00:20:42.863 00:20:42.863 ' 00:20:42.863 
14:26:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:42.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.863 --rc genhtml_branch_coverage=1 00:20:42.863 --rc genhtml_function_coverage=1 00:20:42.863 --rc genhtml_legend=1 00:20:42.863 --rc geninfo_all_blocks=1 00:20:42.863 --rc geninfo_unexecuted_blocks=1 00:20:42.863 00:20:42.863 ' 00:20:42.863 14:26:48 -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:42.863 14:26:48 -- nvmf/common.sh@7 -- # uname -s 00:20:42.863 14:26:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:42.863 14:26:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:42.863 14:26:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:42.864 14:26:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:42.864 14:26:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:42.864 14:26:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:42.864 14:26:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:42.864 14:26:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:42.864 14:26:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:42.864 14:26:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:42.864 14:26:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:20:42.864 14:26:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:20:42.864 14:26:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:42.864 14:26:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:42.864 14:26:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:42.864 14:26:48 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:42.864 14:26:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.864 14:26:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.864 14:26:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.864 14:26:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.864 14:26:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.864 14:26:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.864 14:26:48 -- paths/export.sh@5 -- # export PATH 00:20:42.864 14:26:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.864 14:26:48 -- nvmf/common.sh@46 -- # : 0 00:20:42.864 14:26:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:42.864 14:26:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:42.864 14:26:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:42.864 14:26:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:42.864 14:26:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:42.864 14:26:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:42.864 14:26:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:42.864 14:26:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:42.864 14:26:48 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:42.864 14:26:48 -- host/async_init.sh@14 -- # null_block_size=512 00:20:42.864 14:26:48 -- host/async_init.sh@15 -- # null_bdev=null0 00:20:42.864 14:26:48 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:42.864 14:26:48 -- host/async_init.sh@20 -- # uuidgen 00:20:42.864 14:26:48 -- host/async_init.sh@20 -- # tr -d - 00:20:42.864 14:26:48 -- host/async_init.sh@20 -- # nguid=bd476897224f43309f3c6b38413959ba 00:20:42.864 14:26:48 -- host/async_init.sh@22 -- # nvmftestinit 00:20:42.864 14:26:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:42.864 14:26:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:42.864 14:26:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:42.864 14:26:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:42.864 14:26:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:42.864 14:26:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.864 14:26:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:42.864 14:26:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.864 14:26:48 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:42.864 14:26:48 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:42.864 14:26:48 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:42.864 14:26:48 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:42.864 14:26:48 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:42.864 14:26:48 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:42.864 14:26:48 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:42.864 14:26:48 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:42.864 14:26:48 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:42.864 14:26:48 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:42.864 14:26:48 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:42.864 14:26:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:42.864 14:26:48 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:42.864 14:26:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:42.864 14:26:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:42.864 14:26:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:42.864 14:26:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:42.864 14:26:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:42.864 14:26:48 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:42.864 14:26:48 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:42.864 Cannot find device "nvmf_tgt_br" 00:20:42.864 14:26:48 -- nvmf/common.sh@154 -- # true 00:20:42.864 14:26:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:42.864 Cannot find device "nvmf_tgt_br2" 00:20:42.864 14:26:48 -- nvmf/common.sh@155 -- # true 00:20:42.864 14:26:48 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:42.864 14:26:48 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:42.864 Cannot find device "nvmf_tgt_br" 00:20:42.864 14:26:48 -- nvmf/common.sh@157 -- # true 00:20:42.864 14:26:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:42.864 Cannot find device "nvmf_tgt_br2" 00:20:42.864 14:26:48 -- nvmf/common.sh@158 -- # true 00:20:42.864 14:26:48 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:42.864 14:26:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:42.864 14:26:48 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:42.864 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:42.864 14:26:48 -- nvmf/common.sh@161 -- # true 00:20:42.864 14:26:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:42.864 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:42.864 14:26:48 -- nvmf/common.sh@162 -- # true 00:20:42.864 14:26:48 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:42.864 14:26:48 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:42.864 14:26:48 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:42.864 14:26:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:42.864 14:26:48 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:43.128 14:26:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:43.128 14:26:48 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:43.128 14:26:48 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:43.128 14:26:48 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:43.128 14:26:48 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:43.128 14:26:48 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:43.128 14:26:48 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:43.128 14:26:48 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:43.128 14:26:48 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:43.128 14:26:48 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:43.128 14:26:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:43.128 14:26:48 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:43.128 14:26:48 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:43.128 14:26:48 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:43.128 14:26:48 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:43.128 14:26:48 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:43.128 14:26:48 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:43.128 14:26:48 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:43.128 14:26:48 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:43.128 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:43.128 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:20:43.128 00:20:43.128 --- 10.0.0.2 ping statistics --- 00:20:43.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.128 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:20:43.128 14:26:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:43.128 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:43.128 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.100 ms 00:20:43.128 00:20:43.128 --- 10.0.0.3 ping statistics --- 00:20:43.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.128 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:20:43.128 14:26:48 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:43.128 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:43.128 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:20:43.128 00:20:43.128 --- 10.0.0.1 ping statistics --- 00:20:43.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.128 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:20:43.128 14:26:48 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:43.128 14:26:48 -- nvmf/common.sh@421 -- # return 0 00:20:43.128 14:26:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:43.128 14:26:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:43.128 14:26:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:43.128 14:26:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:43.128 14:26:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:43.128 14:26:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:43.128 14:26:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:43.128 14:26:48 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:43.128 14:26:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:43.128 14:26:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:43.128 14:26:48 -- common/autotest_common.sh@10 -- # set +x 00:20:43.128 14:26:48 -- nvmf/common.sh@469 -- # nvmfpid=93311 00:20:43.128 14:26:48 -- nvmf/common.sh@470 -- # waitforlisten 93311 00:20:43.128 14:26:48 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:43.128 14:26:48 -- common/autotest_common.sh@829 -- # '[' -z 93311 ']' 00:20:43.128 14:26:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.128 14:26:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:43.128 14:26:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:43.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:43.128 14:26:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:43.128 14:26:48 -- common/autotest_common.sh@10 -- # set +x 00:20:43.128 [2024-12-05 14:26:48.763087] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:43.128 [2024-12-05 14:26:48.763169] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:43.387 [2024-12-05 14:26:48.904209] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.387 [2024-12-05 14:26:48.969564] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:43.387 [2024-12-05 14:26:48.969674] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:43.387 [2024-12-05 14:26:48.969685] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:43.387 [2024-12-05 14:26:48.969693] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:43.387 [2024-12-05 14:26:48.969721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.323 14:26:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:44.323 14:26:49 -- common/autotest_common.sh@862 -- # return 0 00:20:44.323 14:26:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:44.323 14:26:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:44.323 14:26:49 -- common/autotest_common.sh@10 -- # set +x 00:20:44.323 14:26:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:44.324 14:26:49 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:44.324 14:26:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.324 14:26:49 -- common/autotest_common.sh@10 -- # set +x 00:20:44.324 [2024-12-05 14:26:49.849407] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:44.324 14:26:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.324 14:26:49 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:44.324 14:26:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.324 14:26:49 -- common/autotest_common.sh@10 -- # set +x 00:20:44.324 null0 00:20:44.324 14:26:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.324 14:26:49 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:44.324 14:26:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.324 14:26:49 -- common/autotest_common.sh@10 -- # set +x 00:20:44.324 14:26:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.324 14:26:49 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:44.324 14:26:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.324 14:26:49 -- common/autotest_common.sh@10 -- # set +x 00:20:44.324 14:26:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.324 14:26:49 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g bd476897224f43309f3c6b38413959ba 00:20:44.324 14:26:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.324 14:26:49 -- common/autotest_common.sh@10 -- # set +x 00:20:44.324 14:26:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.324 14:26:49 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:44.324 14:26:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.324 14:26:49 -- common/autotest_common.sh@10 -- # set +x 00:20:44.324 [2024-12-05 14:26:49.889534] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:44.324 14:26:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.324 14:26:49 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:44.324 14:26:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.324 14:26:49 -- common/autotest_common.sh@10 -- # set +x 00:20:44.582 nvme0n1 00:20:44.582 14:26:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.582 14:26:50 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:44.582 14:26:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.582 14:26:50 -- common/autotest_common.sh@10 -- # set +x 00:20:44.582 [ 00:20:44.582 { 00:20:44.582 "aliases": [ 00:20:44.582 "bd476897-224f-4330-9f3c-6b38413959ba" 
00:20:44.582 ], 00:20:44.582 "assigned_rate_limits": { 00:20:44.582 "r_mbytes_per_sec": 0, 00:20:44.582 "rw_ios_per_sec": 0, 00:20:44.582 "rw_mbytes_per_sec": 0, 00:20:44.582 "w_mbytes_per_sec": 0 00:20:44.582 }, 00:20:44.582 "block_size": 512, 00:20:44.582 "claimed": false, 00:20:44.582 "driver_specific": { 00:20:44.582 "mp_policy": "active_passive", 00:20:44.582 "nvme": [ 00:20:44.582 { 00:20:44.582 "ctrlr_data": { 00:20:44.582 "ana_reporting": false, 00:20:44.582 "cntlid": 1, 00:20:44.582 "firmware_revision": "24.01.1", 00:20:44.582 "model_number": "SPDK bdev Controller", 00:20:44.582 "multi_ctrlr": true, 00:20:44.582 "oacs": { 00:20:44.582 "firmware": 0, 00:20:44.582 "format": 0, 00:20:44.582 "ns_manage": 0, 00:20:44.582 "security": 0 00:20:44.582 }, 00:20:44.582 "serial_number": "00000000000000000000", 00:20:44.582 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:44.582 "vendor_id": "0x8086" 00:20:44.582 }, 00:20:44.582 "ns_data": { 00:20:44.582 "can_share": true, 00:20:44.582 "id": 1 00:20:44.582 }, 00:20:44.582 "trid": { 00:20:44.582 "adrfam": "IPv4", 00:20:44.582 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:44.582 "traddr": "10.0.0.2", 00:20:44.582 "trsvcid": "4420", 00:20:44.582 "trtype": "TCP" 00:20:44.582 }, 00:20:44.582 "vs": { 00:20:44.582 "nvme_version": "1.3" 00:20:44.582 } 00:20:44.582 } 00:20:44.582 ] 00:20:44.582 }, 00:20:44.582 "name": "nvme0n1", 00:20:44.582 "num_blocks": 2097152, 00:20:44.582 "product_name": "NVMe disk", 00:20:44.582 "supported_io_types": { 00:20:44.582 "abort": true, 00:20:44.582 "compare": true, 00:20:44.582 "compare_and_write": true, 00:20:44.582 "flush": true, 00:20:44.582 "nvme_admin": true, 00:20:44.582 "nvme_io": true, 00:20:44.582 "read": true, 00:20:44.582 "reset": true, 00:20:44.582 "unmap": false, 00:20:44.582 "write": true, 00:20:44.582 "write_zeroes": true 00:20:44.582 }, 00:20:44.582 "uuid": "bd476897-224f-4330-9f3c-6b38413959ba", 00:20:44.582 "zoned": false 00:20:44.582 } 00:20:44.582 ] 00:20:44.582 14:26:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.582 14:26:50 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:44.582 14:26:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.582 14:26:50 -- common/autotest_common.sh@10 -- # set +x 00:20:44.582 [2024-12-05 14:26:50.149937] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:44.582 [2024-12-05 14:26:50.150017] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2429a00 (9): Bad file descriptor 00:20:44.840 [2024-12-05 14:26:50.291910] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:44.840 14:26:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.840 14:26:50 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:44.840 14:26:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.840 14:26:50 -- common/autotest_common.sh@10 -- # set +x 00:20:44.840 [ 00:20:44.840 { 00:20:44.840 "aliases": [ 00:20:44.840 "bd476897-224f-4330-9f3c-6b38413959ba" 00:20:44.840 ], 00:20:44.840 "assigned_rate_limits": { 00:20:44.840 "r_mbytes_per_sec": 0, 00:20:44.840 "rw_ios_per_sec": 0, 00:20:44.840 "rw_mbytes_per_sec": 0, 00:20:44.840 "w_mbytes_per_sec": 0 00:20:44.840 }, 00:20:44.840 "block_size": 512, 00:20:44.840 "claimed": false, 00:20:44.840 "driver_specific": { 00:20:44.840 "mp_policy": "active_passive", 00:20:44.840 "nvme": [ 00:20:44.840 { 00:20:44.840 "ctrlr_data": { 00:20:44.840 "ana_reporting": false, 00:20:44.840 "cntlid": 2, 00:20:44.840 "firmware_revision": "24.01.1", 00:20:44.840 "model_number": "SPDK bdev Controller", 00:20:44.840 "multi_ctrlr": true, 00:20:44.840 "oacs": { 00:20:44.840 "firmware": 0, 00:20:44.840 "format": 0, 00:20:44.840 "ns_manage": 0, 00:20:44.840 "security": 0 00:20:44.840 }, 00:20:44.840 "serial_number": "00000000000000000000", 00:20:44.840 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:44.840 "vendor_id": "0x8086" 00:20:44.840 }, 00:20:44.840 "ns_data": { 00:20:44.840 "can_share": true, 00:20:44.840 "id": 1 00:20:44.840 }, 00:20:44.840 "trid": { 00:20:44.840 "adrfam": "IPv4", 00:20:44.840 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:44.840 "traddr": "10.0.0.2", 00:20:44.840 "trsvcid": "4420", 00:20:44.840 "trtype": "TCP" 00:20:44.840 }, 00:20:44.840 "vs": { 00:20:44.840 "nvme_version": "1.3" 00:20:44.840 } 00:20:44.840 } 00:20:44.840 ] 00:20:44.840 }, 00:20:44.840 "name": "nvme0n1", 00:20:44.840 "num_blocks": 2097152, 00:20:44.840 "product_name": "NVMe disk", 00:20:44.840 "supported_io_types": { 00:20:44.841 "abort": true, 00:20:44.841 "compare": true, 00:20:44.841 "compare_and_write": true, 00:20:44.841 "flush": true, 00:20:44.841 "nvme_admin": true, 00:20:44.841 "nvme_io": true, 00:20:44.841 "read": true, 00:20:44.841 "reset": true, 00:20:44.841 "unmap": false, 00:20:44.841 "write": true, 00:20:44.841 "write_zeroes": true 00:20:44.841 }, 00:20:44.841 "uuid": "bd476897-224f-4330-9f3c-6b38413959ba", 00:20:44.841 "zoned": false 00:20:44.841 } 00:20:44.841 ] 00:20:44.841 14:26:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.841 14:26:50 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.841 14:26:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.841 14:26:50 -- common/autotest_common.sh@10 -- # set +x 00:20:44.841 14:26:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.841 14:26:50 -- host/async_init.sh@53 -- # mktemp 00:20:44.841 14:26:50 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.2vyUeB0gMh 00:20:44.841 14:26:50 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:44.841 14:26:50 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.2vyUeB0gMh 00:20:44.841 14:26:50 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:44.841 14:26:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.841 14:26:50 -- common/autotest_common.sh@10 -- # set +x 00:20:44.841 14:26:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.841 14:26:50 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:44.841 14:26:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.841 14:26:50 -- common/autotest_common.sh@10 -- # set +x 00:20:44.841 [2024-12-05 14:26:50.354076] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:44.841 [2024-12-05 14:26:50.354222] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:44.841 14:26:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.841 14:26:50 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2vyUeB0gMh 00:20:44.841 14:26:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.841 14:26:50 -- common/autotest_common.sh@10 -- # set +x 00:20:44.841 14:26:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.841 14:26:50 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2vyUeB0gMh 00:20:44.841 14:26:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.841 14:26:50 -- common/autotest_common.sh@10 -- # set +x 00:20:44.841 [2024-12-05 14:26:50.370075] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:44.841 nvme0n1 00:20:44.841 14:26:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.841 14:26:50 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:44.841 14:26:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.841 14:26:50 -- common/autotest_common.sh@10 -- # set +x 00:20:44.841 [ 00:20:44.841 { 00:20:44.841 "aliases": [ 00:20:44.841 "bd476897-224f-4330-9f3c-6b38413959ba" 00:20:44.841 ], 00:20:44.841 "assigned_rate_limits": { 00:20:44.841 "r_mbytes_per_sec": 0, 00:20:44.841 "rw_ios_per_sec": 0, 00:20:44.841 "rw_mbytes_per_sec": 0, 00:20:44.841 "w_mbytes_per_sec": 0 00:20:44.841 }, 00:20:44.841 "block_size": 512, 00:20:44.841 "claimed": false, 00:20:44.841 "driver_specific": { 00:20:44.841 "mp_policy": "active_passive", 00:20:44.841 "nvme": [ 00:20:44.841 { 00:20:44.841 "ctrlr_data": { 00:20:44.841 "ana_reporting": false, 00:20:44.841 "cntlid": 3, 00:20:44.841 "firmware_revision": "24.01.1", 00:20:44.841 "model_number": "SPDK bdev Controller", 00:20:44.841 "multi_ctrlr": true, 00:20:44.841 "oacs": { 00:20:44.841 "firmware": 0, 00:20:44.841 "format": 0, 00:20:44.841 "ns_manage": 0, 00:20:44.841 "security": 0 00:20:44.841 }, 00:20:44.841 "serial_number": "00000000000000000000", 00:20:44.841 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:44.841 "vendor_id": "0x8086" 00:20:44.841 }, 00:20:44.841 "ns_data": { 00:20:44.841 "can_share": true, 00:20:44.841 "id": 1 00:20:44.841 }, 00:20:44.841 "trid": { 00:20:44.841 "adrfam": "IPv4", 00:20:44.841 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:44.841 "traddr": "10.0.0.2", 00:20:44.841 "trsvcid": "4421", 00:20:44.841 "trtype": "TCP" 00:20:44.841 }, 00:20:44.841 "vs": { 00:20:44.841 "nvme_version": "1.3" 00:20:44.841 } 00:20:44.841 } 00:20:44.841 ] 00:20:44.841 }, 00:20:44.841 "name": "nvme0n1", 00:20:44.841 "num_blocks": 2097152, 00:20:44.841 "product_name": "NVMe disk", 00:20:44.841 "supported_io_types": { 00:20:44.841 "abort": true, 00:20:44.841 "compare": true, 00:20:44.841 "compare_and_write": true, 00:20:44.841 "flush": true, 00:20:44.841 "nvme_admin": true, 00:20:44.841 "nvme_io": true, 00:20:44.841 
"read": true, 00:20:44.841 "reset": true, 00:20:44.841 "unmap": false, 00:20:44.841 "write": true, 00:20:44.841 "write_zeroes": true 00:20:44.841 }, 00:20:44.841 "uuid": "bd476897-224f-4330-9f3c-6b38413959ba", 00:20:44.841 "zoned": false 00:20:44.841 } 00:20:44.841 ] 00:20:44.841 14:26:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.841 14:26:50 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.841 14:26:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.841 14:26:50 -- common/autotest_common.sh@10 -- # set +x 00:20:44.841 14:26:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.841 14:26:50 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.2vyUeB0gMh 00:20:45.100 14:26:50 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:45.100 14:26:50 -- host/async_init.sh@78 -- # nvmftestfini 00:20:45.101 14:26:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:45.101 14:26:50 -- nvmf/common.sh@116 -- # sync 00:20:45.101 14:26:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:45.101 14:26:50 -- nvmf/common.sh@119 -- # set +e 00:20:45.101 14:26:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:45.101 14:26:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:45.101 rmmod nvme_tcp 00:20:45.101 rmmod nvme_fabrics 00:20:45.101 rmmod nvme_keyring 00:20:45.101 14:26:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:45.101 14:26:50 -- nvmf/common.sh@123 -- # set -e 00:20:45.101 14:26:50 -- nvmf/common.sh@124 -- # return 0 00:20:45.101 14:26:50 -- nvmf/common.sh@477 -- # '[' -n 93311 ']' 00:20:45.101 14:26:50 -- nvmf/common.sh@478 -- # killprocess 93311 00:20:45.101 14:26:50 -- common/autotest_common.sh@936 -- # '[' -z 93311 ']' 00:20:45.101 14:26:50 -- common/autotest_common.sh@940 -- # kill -0 93311 00:20:45.101 14:26:50 -- common/autotest_common.sh@941 -- # uname 00:20:45.101 14:26:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:45.101 14:26:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93311 00:20:45.101 14:26:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:45.101 14:26:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:45.101 killing process with pid 93311 00:20:45.101 14:26:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93311' 00:20:45.101 14:26:50 -- common/autotest_common.sh@955 -- # kill 93311 00:20:45.101 14:26:50 -- common/autotest_common.sh@960 -- # wait 93311 00:20:45.359 14:26:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:45.359 14:26:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:45.359 14:26:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:45.359 14:26:50 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:45.359 14:26:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:45.359 14:26:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.359 14:26:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:45.359 14:26:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.359 14:26:50 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:45.359 00:20:45.359 real 0m2.741s 00:20:45.359 user 0m2.573s 00:20:45.359 sys 0m0.654s 00:20:45.360 14:26:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:45.360 14:26:50 -- common/autotest_common.sh@10 -- # set +x 00:20:45.360 ************************************ 00:20:45.360 END TEST nvmf_async_init 00:20:45.360 
************************************ 00:20:45.360 14:26:50 -- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:45.360 14:26:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:45.360 14:26:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:45.360 14:26:50 -- common/autotest_common.sh@10 -- # set +x 00:20:45.360 ************************************ 00:20:45.360 START TEST dma 00:20:45.360 ************************************ 00:20:45.360 14:26:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:45.360 * Looking for test storage... 00:20:45.360 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:45.360 14:26:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:45.360 14:26:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:45.360 14:26:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:45.619 14:26:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:45.619 14:26:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:45.619 14:26:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:45.619 14:26:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:45.619 14:26:51 -- scripts/common.sh@335 -- # IFS=.-: 00:20:45.619 14:26:51 -- scripts/common.sh@335 -- # read -ra ver1 00:20:45.619 14:26:51 -- scripts/common.sh@336 -- # IFS=.-: 00:20:45.619 14:26:51 -- scripts/common.sh@336 -- # read -ra ver2 00:20:45.619 14:26:51 -- scripts/common.sh@337 -- # local 'op=<' 00:20:45.619 14:26:51 -- scripts/common.sh@339 -- # ver1_l=2 00:20:45.619 14:26:51 -- scripts/common.sh@340 -- # ver2_l=1 00:20:45.619 14:26:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:45.619 14:26:51 -- scripts/common.sh@343 -- # case "$op" in 00:20:45.619 14:26:51 -- scripts/common.sh@344 -- # : 1 00:20:45.619 14:26:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:45.619 14:26:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:45.619 14:26:51 -- scripts/common.sh@364 -- # decimal 1 00:20:45.619 14:26:51 -- scripts/common.sh@352 -- # local d=1 00:20:45.619 14:26:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:45.619 14:26:51 -- scripts/common.sh@354 -- # echo 1 00:20:45.619 14:26:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:45.619 14:26:51 -- scripts/common.sh@365 -- # decimal 2 00:20:45.619 14:26:51 -- scripts/common.sh@352 -- # local d=2 00:20:45.619 14:26:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:45.619 14:26:51 -- scripts/common.sh@354 -- # echo 2 00:20:45.619 14:26:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:45.619 14:26:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:45.619 14:26:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:45.619 14:26:51 -- scripts/common.sh@367 -- # return 0 00:20:45.619 14:26:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:45.619 14:26:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:45.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.619 --rc genhtml_branch_coverage=1 00:20:45.619 --rc genhtml_function_coverage=1 00:20:45.619 --rc genhtml_legend=1 00:20:45.619 --rc geninfo_all_blocks=1 00:20:45.619 --rc geninfo_unexecuted_blocks=1 00:20:45.619 00:20:45.619 ' 00:20:45.619 14:26:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:45.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.619 --rc genhtml_branch_coverage=1 00:20:45.619 --rc genhtml_function_coverage=1 00:20:45.619 --rc genhtml_legend=1 00:20:45.619 --rc geninfo_all_blocks=1 00:20:45.619 --rc geninfo_unexecuted_blocks=1 00:20:45.619 00:20:45.619 ' 00:20:45.619 14:26:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:45.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.619 --rc genhtml_branch_coverage=1 00:20:45.619 --rc genhtml_function_coverage=1 00:20:45.619 --rc genhtml_legend=1 00:20:45.619 --rc geninfo_all_blocks=1 00:20:45.619 --rc geninfo_unexecuted_blocks=1 00:20:45.619 00:20:45.619 ' 00:20:45.619 14:26:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:45.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.619 --rc genhtml_branch_coverage=1 00:20:45.619 --rc genhtml_function_coverage=1 00:20:45.619 --rc genhtml_legend=1 00:20:45.619 --rc geninfo_all_blocks=1 00:20:45.619 --rc geninfo_unexecuted_blocks=1 00:20:45.619 00:20:45.619 ' 00:20:45.619 14:26:51 -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:45.619 14:26:51 -- nvmf/common.sh@7 -- # uname -s 00:20:45.619 14:26:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:45.619 14:26:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:45.619 14:26:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:45.619 14:26:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:45.619 14:26:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:45.619 14:26:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:45.619 14:26:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:45.619 14:26:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:45.619 14:26:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:45.619 14:26:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:45.619 14:26:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:20:45.619 
14:26:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:20:45.619 14:26:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:45.619 14:26:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:45.619 14:26:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:45.619 14:26:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:45.619 14:26:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:45.619 14:26:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:45.620 14:26:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:45.620 14:26:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.620 14:26:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.620 14:26:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.620 14:26:51 -- paths/export.sh@5 -- # export PATH 00:20:45.620 14:26:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.620 14:26:51 -- nvmf/common.sh@46 -- # : 0 00:20:45.620 14:26:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:45.620 14:26:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:45.620 14:26:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:45.620 14:26:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:45.620 14:26:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:45.620 14:26:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
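The host identity exported above comes straight from nvme-cli: nvmf/common.sh generates a host NQN and reuses its UUID suffix as the host ID. A minimal sketch of that derivation, assuming the ID is simply the text after the last colon (the exact expansion used by common.sh is not visible in the trace):

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # assumed derivation: keep everything after the last ':'
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")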
00:20:45.620 14:26:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:45.620 14:26:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:45.620 14:26:51 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:45.620 14:26:51 -- host/dma.sh@13 -- # exit 0 00:20:45.620 ************************************ 00:20:45.620 END TEST dma 00:20:45.620 ************************************ 00:20:45.620 00:20:45.620 real 0m0.195s 00:20:45.620 user 0m0.135s 00:20:45.620 sys 0m0.070s 00:20:45.620 14:26:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:45.620 14:26:51 -- common/autotest_common.sh@10 -- # set +x 00:20:45.620 14:26:51 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:45.620 14:26:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:45.620 14:26:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:45.620 14:26:51 -- common/autotest_common.sh@10 -- # set +x 00:20:45.620 ************************************ 00:20:45.620 START TEST nvmf_identify 00:20:45.620 ************************************ 00:20:45.620 14:26:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:45.620 * Looking for test storage... 00:20:45.620 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:45.620 14:26:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:45.620 14:26:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:45.620 14:26:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:45.880 14:26:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:45.880 14:26:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:45.880 14:26:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:45.880 14:26:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:45.880 14:26:51 -- scripts/common.sh@335 -- # IFS=.-: 00:20:45.880 14:26:51 -- scripts/common.sh@335 -- # read -ra ver1 00:20:45.880 14:26:51 -- scripts/common.sh@336 -- # IFS=.-: 00:20:45.880 14:26:51 -- scripts/common.sh@336 -- # read -ra ver2 00:20:45.880 14:26:51 -- scripts/common.sh@337 -- # local 'op=<' 00:20:45.880 14:26:51 -- scripts/common.sh@339 -- # ver1_l=2 00:20:45.880 14:26:51 -- scripts/common.sh@340 -- # ver2_l=1 00:20:45.880 14:26:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:45.880 14:26:51 -- scripts/common.sh@343 -- # case "$op" in 00:20:45.880 14:26:51 -- scripts/common.sh@344 -- # : 1 00:20:45.880 14:26:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:45.880 14:26:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:45.880 14:26:51 -- scripts/common.sh@364 -- # decimal 1 00:20:45.880 14:26:51 -- scripts/common.sh@352 -- # local d=1 00:20:45.880 14:26:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:45.880 14:26:51 -- scripts/common.sh@354 -- # echo 1 00:20:45.880 14:26:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:45.880 14:26:51 -- scripts/common.sh@365 -- # decimal 2 00:20:45.880 14:26:51 -- scripts/common.sh@352 -- # local d=2 00:20:45.880 14:26:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:45.880 14:26:51 -- scripts/common.sh@354 -- # echo 2 00:20:45.880 14:26:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:45.880 14:26:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:45.880 14:26:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:45.880 14:26:51 -- scripts/common.sh@367 -- # return 0 00:20:45.880 14:26:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:45.880 14:26:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:45.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.880 --rc genhtml_branch_coverage=1 00:20:45.880 --rc genhtml_function_coverage=1 00:20:45.880 --rc genhtml_legend=1 00:20:45.880 --rc geninfo_all_blocks=1 00:20:45.880 --rc geninfo_unexecuted_blocks=1 00:20:45.880 00:20:45.880 ' 00:20:45.880 14:26:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:45.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.880 --rc genhtml_branch_coverage=1 00:20:45.880 --rc genhtml_function_coverage=1 00:20:45.880 --rc genhtml_legend=1 00:20:45.880 --rc geninfo_all_blocks=1 00:20:45.880 --rc geninfo_unexecuted_blocks=1 00:20:45.880 00:20:45.880 ' 00:20:45.880 14:26:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:45.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.880 --rc genhtml_branch_coverage=1 00:20:45.880 --rc genhtml_function_coverage=1 00:20:45.880 --rc genhtml_legend=1 00:20:45.880 --rc geninfo_all_blocks=1 00:20:45.880 --rc geninfo_unexecuted_blocks=1 00:20:45.880 00:20:45.880 ' 00:20:45.880 14:26:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:45.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.880 --rc genhtml_branch_coverage=1 00:20:45.880 --rc genhtml_function_coverage=1 00:20:45.880 --rc genhtml_legend=1 00:20:45.880 --rc geninfo_all_blocks=1 00:20:45.880 --rc geninfo_unexecuted_blocks=1 00:20:45.880 00:20:45.880 ' 00:20:45.880 14:26:51 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:45.880 14:26:51 -- nvmf/common.sh@7 -- # uname -s 00:20:45.880 14:26:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:45.880 14:26:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:45.880 14:26:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:45.880 14:26:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:45.880 14:26:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:45.880 14:26:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:45.880 14:26:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:45.880 14:26:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:45.880 14:26:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:45.880 14:26:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:45.880 14:26:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:20:45.880 
14:26:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:20:45.880 14:26:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:45.880 14:26:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:45.880 14:26:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:45.880 14:26:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:45.880 14:26:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:45.880 14:26:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:45.880 14:26:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:45.880 14:26:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.881 14:26:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.881 14:26:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.881 14:26:51 -- paths/export.sh@5 -- # export PATH 00:20:45.881 14:26:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.881 14:26:51 -- nvmf/common.sh@46 -- # : 0 00:20:45.881 14:26:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:45.881 14:26:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:45.881 14:26:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:45.881 14:26:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:45.881 14:26:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:45.881 14:26:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:20:45.881 14:26:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:45.881 14:26:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:45.881 14:26:51 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:45.881 14:26:51 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:45.881 14:26:51 -- host/identify.sh@14 -- # nvmftestinit 00:20:45.881 14:26:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:45.881 14:26:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:45.881 14:26:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:45.881 14:26:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:45.881 14:26:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:45.881 14:26:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.881 14:26:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:45.881 14:26:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.881 14:26:51 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:45.881 14:26:51 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:45.881 14:26:51 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:45.881 14:26:51 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:45.881 14:26:51 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:45.881 14:26:51 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:45.881 14:26:51 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:45.881 14:26:51 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:45.881 14:26:51 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:45.881 14:26:51 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:45.881 14:26:51 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:45.881 14:26:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:45.881 14:26:51 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:45.881 14:26:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:45.881 14:26:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:45.881 14:26:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:45.881 14:26:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:45.881 14:26:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:45.881 14:26:51 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:45.881 14:26:51 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:45.881 Cannot find device "nvmf_tgt_br" 00:20:45.881 14:26:51 -- nvmf/common.sh@154 -- # true 00:20:45.881 14:26:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:45.881 Cannot find device "nvmf_tgt_br2" 00:20:45.881 14:26:51 -- nvmf/common.sh@155 -- # true 00:20:45.881 14:26:51 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:45.881 14:26:51 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:45.881 Cannot find device "nvmf_tgt_br" 00:20:45.881 14:26:51 -- nvmf/common.sh@157 -- # true 00:20:45.881 14:26:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:45.881 Cannot find device "nvmf_tgt_br2" 00:20:45.881 14:26:51 -- nvmf/common.sh@158 -- # true 00:20:45.881 14:26:51 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:45.881 14:26:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:45.881 14:26:51 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:45.881 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:20:45.881 14:26:51 -- nvmf/common.sh@161 -- # true 00:20:45.881 14:26:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:45.881 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:45.881 14:26:51 -- nvmf/common.sh@162 -- # true 00:20:45.881 14:26:51 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:45.881 14:26:51 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:45.881 14:26:51 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:45.881 14:26:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:45.881 14:26:51 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:46.140 14:26:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:46.140 14:26:51 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:46.140 14:26:51 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:46.140 14:26:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:46.140 14:26:51 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:46.140 14:26:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:46.141 14:26:51 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:46.141 14:26:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:46.141 14:26:51 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:46.141 14:26:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:46.141 14:26:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:46.141 14:26:51 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:46.141 14:26:51 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:46.141 14:26:51 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:46.141 14:26:51 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:46.141 14:26:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:46.141 14:26:51 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:46.141 14:26:51 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:46.141 14:26:51 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:46.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:46.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:20:46.141 00:20:46.141 --- 10.0.0.2 ping statistics --- 00:20:46.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.141 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:20:46.141 14:26:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:46.141 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:46.141 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:20:46.141 00:20:46.141 --- 10.0.0.3 ping statistics --- 00:20:46.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.141 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:20:46.141 14:26:51 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:46.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:46.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:20:46.141 00:20:46.141 --- 10.0.0.1 ping statistics --- 00:20:46.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.141 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:20:46.141 14:26:51 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:46.141 14:26:51 -- nvmf/common.sh@421 -- # return 0 00:20:46.141 14:26:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:46.141 14:26:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:46.141 14:26:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:46.141 14:26:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:46.141 14:26:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:46.141 14:26:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:46.141 14:26:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:46.141 14:26:51 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:46.141 14:26:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:46.141 14:26:51 -- common/autotest_common.sh@10 -- # set +x 00:20:46.141 14:26:51 -- host/identify.sh@19 -- # nvmfpid=93591 00:20:46.141 14:26:51 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:46.141 14:26:51 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:46.141 14:26:51 -- host/identify.sh@23 -- # waitforlisten 93591 00:20:46.141 14:26:51 -- common/autotest_common.sh@829 -- # '[' -z 93591 ']' 00:20:46.141 14:26:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.141 14:26:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:46.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.141 14:26:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.141 14:26:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:46.141 14:26:51 -- common/autotest_common.sh@10 -- # set +x 00:20:46.141 [2024-12-05 14:26:51.756726] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:46.141 [2024-12-05 14:26:51.756794] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:46.400 [2024-12-05 14:26:51.890037] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:46.400 [2024-12-05 14:26:51.952209] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:46.400 [2024-12-05 14:26:51.952349] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:46.400 [2024-12-05 14:26:51.952361] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:46.400 [2024-12-05 14:26:51.952369] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
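Laid out end to end, the nvmf_veth_init topology that the identify test just rebuilt (and that the earlier tests used as well) is: one initiator veth pair on the host side, two target veth pairs whose inner ends move into the nvmf_tgt_ns_spdk namespace, everything joined by one bridge, and TCP port 4420 opened on the initiator interface. A condensed recap using the commands from the trace; the repeated `ip link set ... up` steps are folded into a single comment:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # bring every interface up: nvmf_init_if/_br and the tgt bridges on the host, nvmf_tgt_if/_if2 and lo in the netns
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT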
00:20:46.400 [2024-12-05 14:26:51.952708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.400 [2024-12-05 14:26:51.952782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:46.400 [2024-12-05 14:26:51.953127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:46.400 [2024-12-05 14:26:51.953137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.336 14:26:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:47.336 14:26:52 -- common/autotest_common.sh@862 -- # return 0 00:20:47.336 14:26:52 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:47.336 14:26:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.336 14:26:52 -- common/autotest_common.sh@10 -- # set +x 00:20:47.336 [2024-12-05 14:26:52.825638] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:47.336 14:26:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.336 14:26:52 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:47.336 14:26:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:47.336 14:26:52 -- common/autotest_common.sh@10 -- # set +x 00:20:47.337 14:26:52 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:47.337 14:26:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.337 14:26:52 -- common/autotest_common.sh@10 -- # set +x 00:20:47.337 Malloc0 00:20:47.337 14:26:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.337 14:26:52 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:47.337 14:26:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.337 14:26:52 -- common/autotest_common.sh@10 -- # set +x 00:20:47.337 14:26:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.337 14:26:52 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:47.337 14:26:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.337 14:26:52 -- common/autotest_common.sh@10 -- # set +x 00:20:47.337 14:26:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.337 14:26:52 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:47.337 14:26:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.337 14:26:52 -- common/autotest_common.sh@10 -- # set +x 00:20:47.337 [2024-12-05 14:26:52.931127] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:47.337 14:26:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.337 14:26:52 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:47.337 14:26:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.337 14:26:52 -- common/autotest_common.sh@10 -- # set +x 00:20:47.337 14:26:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.337 14:26:52 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:47.337 14:26:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.337 14:26:52 -- common/autotest_common.sh@10 -- # set +x 00:20:47.337 [2024-12-05 14:26:52.946912] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:47.337 [ 
00:20:47.337 { 00:20:47.337 "allow_any_host": true, 00:20:47.337 "hosts": [], 00:20:47.337 "listen_addresses": [ 00:20:47.337 { 00:20:47.337 "adrfam": "IPv4", 00:20:47.337 "traddr": "10.0.0.2", 00:20:47.337 "transport": "TCP", 00:20:47.337 "trsvcid": "4420", 00:20:47.337 "trtype": "TCP" 00:20:47.337 } 00:20:47.337 ], 00:20:47.337 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:47.337 "subtype": "Discovery" 00:20:47.337 }, 00:20:47.337 { 00:20:47.337 "allow_any_host": true, 00:20:47.337 "hosts": [], 00:20:47.337 "listen_addresses": [ 00:20:47.337 { 00:20:47.337 "adrfam": "IPv4", 00:20:47.337 "traddr": "10.0.0.2", 00:20:47.337 "transport": "TCP", 00:20:47.337 "trsvcid": "4420", 00:20:47.337 "trtype": "TCP" 00:20:47.337 } 00:20:47.337 ], 00:20:47.337 "max_cntlid": 65519, 00:20:47.337 "max_namespaces": 32, 00:20:47.337 "min_cntlid": 1, 00:20:47.337 "model_number": "SPDK bdev Controller", 00:20:47.337 "namespaces": [ 00:20:47.337 { 00:20:47.337 "bdev_name": "Malloc0", 00:20:47.337 "eui64": "ABCDEF0123456789", 00:20:47.337 "name": "Malloc0", 00:20:47.337 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:47.337 "nsid": 1, 00:20:47.337 "uuid": "c9d1cf8d-489c-4b48-ac23-b5c5d681bb24" 00:20:47.337 } 00:20:47.337 ], 00:20:47.337 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.337 "serial_number": "SPDK00000000000001", 00:20:47.337 "subtype": "NVMe" 00:20:47.337 } 00:20:47.337 ] 00:20:47.337 14:26:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.337 14:26:52 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:47.337 [2024-12-05 14:26:52.981339] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:20:47.337 [2024-12-05 14:26:52.981417] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93644 ] 00:20:47.599 [2024-12-05 14:26:53.120613] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:47.599 [2024-12-05 14:26:53.120685] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:47.599 [2024-12-05 14:26:53.120692] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:47.599 [2024-12-05 14:26:53.120701] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:47.599 [2024-12-05 14:26:53.120709] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:47.599 [2024-12-05 14:26:53.120863] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:47.599 [2024-12-05 14:26:53.120948] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2105510 0 00:20:47.599 [2024-12-05 14:26:53.132870] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:47.599 [2024-12-05 14:26:53.132895] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:47.599 [2024-12-05 14:26:53.132916] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:47.599 [2024-12-05 14:26:53.132920] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:47.599 [2024-12-05 14:26:53.132966] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.599 [2024-12-05 14:26:53.132973] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.599 [2024-12-05 14:26:53.132977] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2105510) 00:20:47.600 [2024-12-05 14:26:53.132999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:47.600 [2024-12-05 14:26:53.133031] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21518a0, cid 0, qid 0 00:20:47.600 [2024-12-05 14:26:53.140869] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.600 [2024-12-05 14:26:53.140889] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.600 [2024-12-05 14:26:53.140910] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.600 [2024-12-05 14:26:53.140915] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21518a0) on tqpair=0x2105510 00:20:47.600 [2024-12-05 14:26:53.140926] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:47.600 [2024-12-05 14:26:53.140933] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:47.600 [2024-12-05 14:26:53.140939] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:47.600 [2024-12-05 14:26:53.140954] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.600 [2024-12-05 14:26:53.140959] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.600 [2024-12-05 
14:26:53.140963] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2105510) 00:20:47.600 [2024-12-05 14:26:53.140971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.600 [2024-12-05 14:26:53.140999] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21518a0, cid 0, qid 0 00:20:47.600 [2024-12-05 14:26:53.141070] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.600 [2024-12-05 14:26:53.141077] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.600 [2024-12-05 14:26:53.141080] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.600 [2024-12-05 14:26:53.141084] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21518a0) on tqpair=0x2105510 00:20:47.600 [2024-12-05 14:26:53.141091] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:47.600 [2024-12-05 14:26:53.141098] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:47.600 [2024-12-05 14:26:53.141105] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.600 [2024-12-05 14:26:53.141109] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.600 [2024-12-05 14:26:53.141112] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2105510) 00:20:47.600 [2024-12-05 14:26:53.141134] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.600 [2024-12-05 14:26:53.141171] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21518a0, cid 0, qid 0 00:20:47.600 [2024-12-05 14:26:53.141218] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.600 [2024-12-05 14:26:53.141225] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.600 [2024-12-05 14:26:53.141232] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.600 [2024-12-05 14:26:53.141235] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21518a0) on tqpair=0x2105510 00:20:47.600 [2024-12-05 14:26:53.141242] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:47.600 [2024-12-05 14:26:53.141251] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:47.600 [2024-12-05 14:26:53.141258] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.600 [2024-12-05 14:26:53.141262] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.600 [2024-12-05 14:26:53.141265] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2105510) 00:20:47.600 [2024-12-05 14:26:53.141272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.600 [2024-12-05 14:26:53.141292] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21518a0, cid 0, qid 0 00:20:47.600 [2024-12-05 14:26:53.141341] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.600 [2024-12-05 14:26:53.141348] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.600 [2024-12-05 14:26:53.141351] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.600 [2024-12-05 14:26:53.141355] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21518a0) on tqpair=0x2105510 00:20:47.600 [2024-12-05 14:26:53.141362] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:47.600 [2024-12-05 14:26:53.141371] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.600 [2024-12-05 14:26:53.141376] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.600 [2024-12-05 14:26:53.141380] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2105510) 00:20:47.600 [2024-12-05 14:26:53.141387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.600 [2024-12-05 14:26:53.141405] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21518a0, cid 0, qid 0 00:20:47.600 [2024-12-05 14:26:53.141457] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.600 [2024-12-05 14:26:53.141464] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.600 [2024-12-05 14:26:53.141467] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.600 [2024-12-05 14:26:53.141471] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21518a0) on tqpair=0x2105510 00:20:47.600 [2024-12-05 14:26:53.141476] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:47.600 [2024-12-05 14:26:53.141481] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:47.600 [2024-12-05 14:26:53.141489] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:47.600 [2024-12-05 14:26:53.141595] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:47.600 [2024-12-05 14:26:53.141599] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:47.600 [2024-12-05 14:26:53.141608] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.600 [2024-12-05 14:26:53.141612] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.600 [2024-12-05 14:26:53.141616] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2105510) 00:20:47.600 [2024-12-05 14:26:53.141623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.600 [2024-12-05 14:26:53.141642] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21518a0, cid 0, qid 0 00:20:47.600 [2024-12-05 14:26:53.141694] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.600 [2024-12-05 14:26:53.141700] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.600 [2024-12-05 14:26:53.141704] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
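[editor's note] The DEBUG records above trace SPDK's controller initialization state machine for the discovery subsystem: each "setting state to ..." line is one step (connect adminq, read vs, read cap, check en, disable and wait for CSTS.RDY = 0, write CC.EN = 1, then wait for CSTS.RDY = 1), all carried as FABRIC PROPERTY GET/SET capsules on admin qpair 0 of tqpair 0x2105510. For orientation, a minimal sketch of how an application reaches the same point through the SPDK public API; the whole state machine runs inside spdk_nvme_connect(), and the sketch (option handling, error paths) is illustrative rather than taken from this test:

    /* sketch: connect to the discovery subsystem the test is probing.
     * spdk_nvme_connect() internally performs the steps seen in the log
     * (read VS/CAP, CC.EN = 1, wait for CSTS.RDY = 1, identify, AER, keep-alive). */
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
            struct spdk_env_opts env_opts;
            struct spdk_nvme_transport_id trid = {0};
            struct spdk_nvme_ctrlr *ctrlr;

            spdk_env_opts_init(&env_opts);
            if (spdk_env_init(&env_opts) != 0) {
                    return 1;
            }

            /* Same target the trace shows: TCP, 10.0.0.2:4420, discovery NQN. */
            if (spdk_nvme_transport_id_parse(&trid,
                    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                    "subnqn:nqn.2014-08.org.nvmexpress.discovery") != 0) {
                    return 1;
            }

            ctrlr = spdk_nvme_connect(&trid, NULL, 0); /* blocks until the controller is ready */
            if (ctrlr == NULL) {
                    return 1;
            }

            spdk_nvme_detach(ctrlr); /* drives the CC.SHN shutdown seen later in the trace */
            return 0;
    }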
00:20:47.600 [2024-12-05 14:26:53.141707] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21518a0) on tqpair=0x2105510 00:20:47.600 [2024-12-05 14:26:53.141713] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:47.600 [2024-12-05 14:26:53.141723] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.600 [2024-12-05 14:26:53.141727] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.600 [2024-12-05 14:26:53.141731] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2105510) 00:20:47.600 [2024-12-05 14:26:53.141737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.600 [2024-12-05 14:26:53.141756] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21518a0, cid 0, qid 0 00:20:47.600 [2024-12-05 14:26:53.141804] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.600 [2024-12-05 14:26:53.141827] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.600 [2024-12-05 14:26:53.141831] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.600 [2024-12-05 14:26:53.141835] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21518a0) on tqpair=0x2105510 00:20:47.601 [2024-12-05 14:26:53.141841] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:47.601 [2024-12-05 14:26:53.141846] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:47.601 [2024-12-05 14:26:53.141867] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:47.601 [2024-12-05 14:26:53.141883] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:47.601 [2024-12-05 14:26:53.141892] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.601 [2024-12-05 14:26:53.141897] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.601 [2024-12-05 14:26:53.141900] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2105510) 00:20:47.601 [2024-12-05 14:26:53.141908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.601 [2024-12-05 14:26:53.141931] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21518a0, cid 0, qid 0 00:20:47.601 [2024-12-05 14:26:53.142017] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:47.601 [2024-12-05 14:26:53.142024] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:47.601 [2024-12-05 14:26:53.142028] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:47.601 [2024-12-05 14:26:53.142032] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2105510): datao=0, datal=4096, cccid=0 00:20:47.601 [2024-12-05 14:26:53.142037] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21518a0) on tqpair(0x2105510): expected_datao=0, 
payload_size=4096 00:20:47.601 [2024-12-05 14:26:53.142045] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:47.601 [2024-12-05 14:26:53.142050] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:47.601 [2024-12-05 14:26:53.142058] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.601 [2024-12-05 14:26:53.142064] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.601 [2024-12-05 14:26:53.142067] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.601 [2024-12-05 14:26:53.142071] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21518a0) on tqpair=0x2105510 00:20:47.601 [2024-12-05 14:26:53.142080] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:47.601 [2024-12-05 14:26:53.142086] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:47.601 [2024-12-05 14:26:53.142090] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:47.601 [2024-12-05 14:26:53.142095] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:47.601 [2024-12-05 14:26:53.142100] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:47.601 [2024-12-05 14:26:53.142105] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:47.601 [2024-12-05 14:26:53.142118] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:47.601 [2024-12-05 14:26:53.142126] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.601 [2024-12-05 14:26:53.142131] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.601 [2024-12-05 14:26:53.142134] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2105510) 00:20:47.601 [2024-12-05 14:26:53.142142] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:47.601 [2024-12-05 14:26:53.142163] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21518a0, cid 0, qid 0 00:20:47.601 [2024-12-05 14:26:53.142238] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.601 [2024-12-05 14:26:53.142245] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.601 [2024-12-05 14:26:53.142248] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.601 [2024-12-05 14:26:53.142252] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21518a0) on tqpair=0x2105510 00:20:47.601 [2024-12-05 14:26:53.142260] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.601 [2024-12-05 14:26:53.142264] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.601 [2024-12-05 14:26:53.142268] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2105510) 00:20:47.601 [2024-12-05 14:26:53.142274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.601 [2024-12-05 
14:26:53.142280] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.601 [2024-12-05 14:26:53.142284] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.601 [2024-12-05 14:26:53.142287] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2105510) 00:20:47.601 [2024-12-05 14:26:53.142293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.601 [2024-12-05 14:26:53.142299] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.601 [2024-12-05 14:26:53.142302] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.601 [2024-12-05 14:26:53.142306] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2105510) 00:20:47.601 [2024-12-05 14:26:53.142311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.601 [2024-12-05 14:26:53.142317] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.601 [2024-12-05 14:26:53.142320] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.601 [2024-12-05 14:26:53.142324] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2105510) 00:20:47.601 [2024-12-05 14:26:53.142329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.601 [2024-12-05 14:26:53.142334] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:47.601 [2024-12-05 14:26:53.142347] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:47.601 [2024-12-05 14:26:53.142354] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.601 [2024-12-05 14:26:53.142358] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.601 [2024-12-05 14:26:53.142361] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2105510) 00:20:47.601 [2024-12-05 14:26:53.142368] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.601 [2024-12-05 14:26:53.142390] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21518a0, cid 0, qid 0 00:20:47.601 [2024-12-05 14:26:53.142397] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151a00, cid 1, qid 0 00:20:47.601 [2024-12-05 14:26:53.142401] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151b60, cid 2, qid 0 00:20:47.601 [2024-12-05 14:26:53.142405] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151cc0, cid 3, qid 0 00:20:47.601 [2024-12-05 14:26:53.142409] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151e20, cid 4, qid 0 00:20:47.601 [2024-12-05 14:26:53.142493] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.601 [2024-12-05 14:26:53.142499] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.601 [2024-12-05 14:26:53.142503] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.601 [2024-12-05 14:26:53.142506] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x2151e20) on tqpair=0x2105510 00:20:47.601 [2024-12-05 14:26:53.142513] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:47.601 [2024-12-05 14:26:53.142518] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:47.601 [2024-12-05 14:26:53.142528] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.601 [2024-12-05 14:26:53.142533] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.601 [2024-12-05 14:26:53.142536] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2105510) 00:20:47.602 [2024-12-05 14:26:53.142543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.602 [2024-12-05 14:26:53.142562] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151e20, cid 4, qid 0 00:20:47.602 [2024-12-05 14:26:53.142626] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:47.602 [2024-12-05 14:26:53.142632] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:47.602 [2024-12-05 14:26:53.142636] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:47.602 [2024-12-05 14:26:53.142640] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2105510): datao=0, datal=4096, cccid=4 00:20:47.602 [2024-12-05 14:26:53.142644] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2151e20) on tqpair(0x2105510): expected_datao=0, payload_size=4096 00:20:47.602 [2024-12-05 14:26:53.142652] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:47.602 [2024-12-05 14:26:53.142656] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:47.602 [2024-12-05 14:26:53.142664] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.602 [2024-12-05 14:26:53.142670] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.602 [2024-12-05 14:26:53.142673] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.602 [2024-12-05 14:26:53.142677] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151e20) on tqpair=0x2105510 00:20:47.602 [2024-12-05 14:26:53.142690] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:47.602 [2024-12-05 14:26:53.142721] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.602 [2024-12-05 14:26:53.142727] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.602 [2024-12-05 14:26:53.142731] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2105510) 00:20:47.602 [2024-12-05 14:26:53.142738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.602 [2024-12-05 14:26:53.142745] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.602 [2024-12-05 14:26:53.142749] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.602 [2024-12-05 14:26:53.142752] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2105510) 00:20:47.602 [2024-12-05 14:26:53.142758] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.602 [2024-12-05 14:26:53.142785] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151e20, cid 4, qid 0 00:20:47.602 [2024-12-05 14:26:53.142792] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151f80, cid 5, qid 0 00:20:47.602 [2024-12-05 14:26:53.142900] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:47.602 [2024-12-05 14:26:53.142910] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:47.602 [2024-12-05 14:26:53.142913] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:47.602 [2024-12-05 14:26:53.142917] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2105510): datao=0, datal=1024, cccid=4 00:20:47.602 [2024-12-05 14:26:53.142921] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2151e20) on tqpair(0x2105510): expected_datao=0, payload_size=1024 00:20:47.602 [2024-12-05 14:26:53.142929] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:47.602 [2024-12-05 14:26:53.142932] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:47.602 [2024-12-05 14:26:53.142938] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.602 [2024-12-05 14:26:53.142944] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.602 [2024-12-05 14:26:53.142947] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.602 [2024-12-05 14:26:53.142951] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151f80) on tqpair=0x2105510 00:20:47.602 [2024-12-05 14:26:53.188906] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.602 [2024-12-05 14:26:53.188932] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.602 [2024-12-05 14:26:53.188953] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.602 [2024-12-05 14:26:53.188958] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151e20) on tqpair=0x2105510 00:20:47.602 [2024-12-05 14:26:53.188972] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.602 [2024-12-05 14:26:53.188977] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.602 [2024-12-05 14:26:53.188981] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2105510) 00:20:47.602 [2024-12-05 14:26:53.188990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.602 [2024-12-05 14:26:53.189025] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151e20, cid 4, qid 0 00:20:47.602 [2024-12-05 14:26:53.189101] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:47.602 [2024-12-05 14:26:53.189108] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:47.602 [2024-12-05 14:26:53.189111] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:47.602 [2024-12-05 14:26:53.189115] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2105510): datao=0, datal=3072, cccid=4 00:20:47.602 [2024-12-05 14:26:53.189119] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2151e20) on tqpair(0x2105510): expected_datao=0, payload_size=3072 00:20:47.602 [2024-12-05 
14:26:53.189127] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:47.602 [2024-12-05 14:26:53.189131] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:47.602 [2024-12-05 14:26:53.189140] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.602 [2024-12-05 14:26:53.189145] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.602 [2024-12-05 14:26:53.189149] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.602 [2024-12-05 14:26:53.189152] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151e20) on tqpair=0x2105510 00:20:47.602 [2024-12-05 14:26:53.189193] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.602 [2024-12-05 14:26:53.189197] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.602 [2024-12-05 14:26:53.189201] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2105510) 00:20:47.602 [2024-12-05 14:26:53.189208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.602 [2024-12-05 14:26:53.189234] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151e20, cid 4, qid 0 00:20:47.602 [2024-12-05 14:26:53.189304] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:47.602 [2024-12-05 14:26:53.189311] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:47.602 [2024-12-05 14:26:53.189314] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:47.602 [2024-12-05 14:26:53.189318] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2105510): datao=0, datal=8, cccid=4 00:20:47.602 [2024-12-05 14:26:53.189322] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2151e20) on tqpair(0x2105510): expected_datao=0, payload_size=8 00:20:47.602 [2024-12-05 14:26:53.189329] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:47.602 [2024-12-05 14:26:53.189333] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:47.602 ===================================================== 00:20:47.602 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:47.602 ===================================================== 00:20:47.602 Controller Capabilities/Features 00:20:47.602 ================================ 00:20:47.602 Vendor ID: 0000 00:20:47.602 Subsystem Vendor ID: 0000 00:20:47.602 Serial Number: .................... 00:20:47.602 Model Number: ........................................ 
00:20:47.602 Firmware Version: 24.01.1 00:20:47.602 Recommended Arb Burst: 0 00:20:47.602 IEEE OUI Identifier: 00 00 00 00:20:47.602 Multi-path I/O 00:20:47.602 May have multiple subsystem ports: No 00:20:47.602 May have multiple controllers: No 00:20:47.602 Associated with SR-IOV VF: No 00:20:47.602 Max Data Transfer Size: 131072 00:20:47.602 Max Number of Namespaces: 0 00:20:47.602 Max Number of I/O Queues: 1024 00:20:47.602 NVMe Specification Version (VS): 1.3 00:20:47.603 NVMe Specification Version (Identify): 1.3 00:20:47.603 Maximum Queue Entries: 128 00:20:47.603 Contiguous Queues Required: Yes 00:20:47.603 Arbitration Mechanisms Supported 00:20:47.603 Weighted Round Robin: Not Supported 00:20:47.603 Vendor Specific: Not Supported 00:20:47.603 Reset Timeout: 15000 ms 00:20:47.603 Doorbell Stride: 4 bytes 00:20:47.603 NVM Subsystem Reset: Not Supported 00:20:47.603 Command Sets Supported 00:20:47.603 NVM Command Set: Supported 00:20:47.603 Boot Partition: Not Supported 00:20:47.603 Memory Page Size Minimum: 4096 bytes 00:20:47.603 Memory Page Size Maximum: 4096 bytes 00:20:47.603 Persistent Memory Region: Not Supported 00:20:47.603 Optional Asynchronous Events Supported 00:20:47.603 Namespace Attribute Notices: Not Supported 00:20:47.603 Firmware Activation Notices: Not Supported 00:20:47.603 ANA Change Notices: Not Supported 00:20:47.603 PLE Aggregate Log Change Notices: Not Supported 00:20:47.603 LBA Status Info Alert Notices: Not Supported 00:20:47.603 EGE Aggregate Log Change Notices: Not Supported 00:20:47.603 Normal NVM Subsystem Shutdown event: Not Supported 00:20:47.603 Zone Descriptor Change Notices: Not Supported 00:20:47.603 Discovery Log Change Notices: Supported 00:20:47.603 Controller Attributes 00:20:47.603 128-bit Host Identifier: Not Supported 00:20:47.603 Non-Operational Permissive Mode: Not Supported 00:20:47.603 NVM Sets: Not Supported 00:20:47.603 Read Recovery Levels: Not Supported 00:20:47.603 Endurance Groups: Not Supported 00:20:47.603 Predictable Latency Mode: Not Supported 00:20:47.603 Traffic Based Keep ALive: Not Supported 00:20:47.603 Namespace Granularity: Not Supported 00:20:47.603 SQ Associations: Not Supported 00:20:47.603 UUID List: Not Supported 00:20:47.603 Multi-Domain Subsystem: Not Supported 00:20:47.603 Fixed Capacity Management: Not Supported 00:20:47.603 Variable Capacity Management: Not Supported 00:20:47.603 Delete Endurance Group: Not Supported 00:20:47.603 Delete NVM Set: Not Supported 00:20:47.603 Extended LBA Formats Supported: Not Supported 00:20:47.603 Flexible Data Placement Supported: Not Supported 00:20:47.603 00:20:47.603 Controller Memory Buffer Support 00:20:47.603 ================================ 00:20:47.603 Supported: No 00:20:47.603 00:20:47.603 Persistent Memory Region Support 00:20:47.603 ================================ 00:20:47.603 Supported: No 00:20:47.603 00:20:47.603 Admin Command Set Attributes 00:20:47.603 ============================ 00:20:47.603 Security Send/Receive: Not Supported 00:20:47.603 Format NVM: Not Supported 00:20:47.603 Firmware Activate/Download: Not Supported 00:20:47.603 Namespace Management: Not Supported 00:20:47.603 Device Self-Test: Not Supported 00:20:47.603 Directives: Not Supported 00:20:47.603 NVMe-MI: Not Supported 00:20:47.603 Virtualization Management: Not Supported 00:20:47.603 Doorbell Buffer Config: Not Supported 00:20:47.603 Get LBA Status Capability: Not Supported 00:20:47.603 Command & Feature Lockdown Capability: Not Supported 00:20:47.603 Abort Command Limit: 1 00:20:47.603 
Async Event Request Limit: 4 00:20:47.603 Number of Firmware Slots: N/A 00:20:47.603 Firmware Slot 1 Read-Only: N/A 00:20:47.603 [2024-12-05 14:26:53.229894] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.603 [2024-12-05 14:26:53.229916] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.603 [2024-12-05 14:26:53.229937] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.603 [2024-12-05 14:26:53.229941] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151e20) on tqpair=0x2105510 00:20:47.603 Firmware Activation Without Reset: N/A 00:20:47.603 Multiple Update Detection Support: N/A 00:20:47.603 Firmware Update Granularity: No Information Provided 00:20:47.603 Per-Namespace SMART Log: No 00:20:47.603 Asymmetric Namespace Access Log Page: Not Supported 00:20:47.603 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:47.603 Command Effects Log Page: Not Supported 00:20:47.603 Get Log Page Extended Data: Supported 00:20:47.603 Telemetry Log Pages: Not Supported 00:20:47.603 Persistent Event Log Pages: Not Supported 00:20:47.603 Supported Log Pages Log Page: May Support 00:20:47.603 Commands Supported & Effects Log Page: Not Supported 00:20:47.603 Feature Identifiers & Effects Log Page:May Support 00:20:47.603 NVMe-MI Commands & Effects Log Page: May Support 00:20:47.603 Data Area 4 for Telemetry Log: Not Supported 00:20:47.603 Error Log Page Entries Supported: 128 00:20:47.603 Keep Alive: Not Supported 00:20:47.603 00:20:47.603 NVM Command Set Attributes 00:20:47.603 ========================== 00:20:47.603 Submission Queue Entry Size 00:20:47.603 Max: 1 00:20:47.603 Min: 1 00:20:47.603 Completion Queue Entry Size 00:20:47.603 Max: 1 00:20:47.603 Min: 1 00:20:47.603 Number of Namespaces: 0 00:20:47.603 Compare Command: Not Supported 00:20:47.603 Write Uncorrectable Command: Not Supported 00:20:47.603 Dataset Management Command: Not Supported 00:20:47.603 Write Zeroes Command: Not Supported 00:20:47.603 Set Features Save Field: Not Supported 00:20:47.603 Reservations: Not Supported 00:20:47.603 Timestamp: Not Supported 00:20:47.603 Copy: Not Supported 00:20:47.603 Volatile Write Cache: Not Present 00:20:47.603 Atomic Write Unit (Normal): 1 00:20:47.603 Atomic Write Unit (PFail): 1 00:20:47.603 Atomic Compare & Write Unit: 1 00:20:47.603 Fused Compare & Write: Supported 00:20:47.603 Scatter-Gather List 00:20:47.603 SGL Command Set: Supported 00:20:47.603 SGL Keyed: Supported 00:20:47.603 SGL Bit Bucket Descriptor: Not Supported 00:20:47.603 SGL Metadata Pointer: Not Supported 00:20:47.603 Oversized SGL: Not Supported 00:20:47.603 SGL Metadata Address: Not Supported 00:20:47.603 SGL Offset: Supported 00:20:47.603 Transport SGL Data Block: Not Supported 00:20:47.603 Replay Protected Memory Block: Not Supported 00:20:47.603 00:20:47.603 Firmware Slot Information 00:20:47.603 ========================= 00:20:47.603 Active slot: 0 00:20:47.603 00:20:47.603 00:20:47.603 Error Log 00:20:47.603 ========= 00:20:47.603 00:20:47.603 Active Namespaces 00:20:47.603 ================= 00:20:47.603 Discovery Log Page 00:20:47.603 ================== 00:20:47.603 Generation Counter: 2 00:20:47.603 Number of Records: 2 00:20:47.603 Record Format: 0 00:20:47.603 00:20:47.603 Discovery Log Entry 0 00:20:47.603 ---------------------- 00:20:47.603 Transport Type: 3 (TCP) 00:20:47.603 Address Family: 1 (IPv4) 00:20:47.603 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:47.603 Entry Flags: 00:20:47.603 Duplicate
Returned Information: 1 00:20:47.603 Explicit Persistent Connection Support for Discovery: 1 00:20:47.603 Transport Requirements: 00:20:47.604 Secure Channel: Not Required 00:20:47.604 Port ID: 0 (0x0000) 00:20:47.604 Controller ID: 65535 (0xffff) 00:20:47.604 Admin Max SQ Size: 128 00:20:47.604 Transport Service Identifier: 4420 00:20:47.604 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:47.604 Transport Address: 10.0.0.2 00:20:47.604 Discovery Log Entry 1 00:20:47.604 ---------------------- 00:20:47.604 Transport Type: 3 (TCP) 00:20:47.604 Address Family: 1 (IPv4) 00:20:47.604 Subsystem Type: 2 (NVM Subsystem) 00:20:47.604 Entry Flags: 00:20:47.604 Duplicate Returned Information: 0 00:20:47.604 Explicit Persistent Connection Support for Discovery: 0 00:20:47.604 Transport Requirements: 00:20:47.604 Secure Channel: Not Required 00:20:47.604 Port ID: 0 (0x0000) 00:20:47.604 Controller ID: 65535 (0xffff) 00:20:47.604 Admin Max SQ Size: 128 00:20:47.604 Transport Service Identifier: 4420 00:20:47.604 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:47.604 Transport Address: 10.0.0.2 [2024-12-05 14:26:53.230046] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:47.604 [2024-12-05 14:26:53.230062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.604 [2024-12-05 14:26:53.230069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.604 [2024-12-05 14:26:53.230075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.604 [2024-12-05 14:26:53.230080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.604 [2024-12-05 14:26:53.230089] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.604 [2024-12-05 14:26:53.230093] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.604 [2024-12-05 14:26:53.230097] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2105510) 00:20:47.604 [2024-12-05 14:26:53.230104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.604 [2024-12-05 14:26:53.230128] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151cc0, cid 3, qid 0 00:20:47.604 [2024-12-05 14:26:53.230204] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.604 [2024-12-05 14:26:53.230211] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.604 [2024-12-05 14:26:53.230214] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.604 [2024-12-05 14:26:53.230218] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151cc0) on tqpair=0x2105510 00:20:47.604 [2024-12-05 14:26:53.230226] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.604 [2024-12-05 14:26:53.230230] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.604 [2024-12-05 14:26:53.230234] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2105510) 00:20:47.604 [2024-12-05 14:26:53.230240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY 
SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.604 [2024-12-05 14:26:53.230280] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151cc0, cid 3, qid 0 00:20:47.604 [2024-12-05 14:26:53.230357] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.604 [2024-12-05 14:26:53.230364] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.604 [2024-12-05 14:26:53.230367] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.604 [2024-12-05 14:26:53.230371] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151cc0) on tqpair=0x2105510 00:20:47.604 [2024-12-05 14:26:53.230377] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:47.604 [2024-12-05 14:26:53.230381] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:47.604 [2024-12-05 14:26:53.230391] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.604 [2024-12-05 14:26:53.230395] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.604 [2024-12-05 14:26:53.230399] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2105510) 00:20:47.604 [2024-12-05 14:26:53.230406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.604 [2024-12-05 14:26:53.230437] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151cc0, cid 3, qid 0 00:20:47.604 [2024-12-05 14:26:53.230488] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.604 [2024-12-05 14:26:53.230494] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.604 [2024-12-05 14:26:53.230497] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.604 [2024-12-05 14:26:53.230501] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151cc0) on tqpair=0x2105510 00:20:47.604 [2024-12-05 14:26:53.230512] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.604 [2024-12-05 14:26:53.230517] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.604 [2024-12-05 14:26:53.230520] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2105510) 00:20:47.604 [2024-12-05 14:26:53.230527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.604 [2024-12-05 14:26:53.230546] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151cc0, cid 3, qid 0 00:20:47.604 [2024-12-05 14:26:53.230597] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.604 [2024-12-05 14:26:53.230603] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.604 [2024-12-05 14:26:53.230607] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.604 [2024-12-05 14:26:53.230611] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151cc0) on tqpair=0x2105510 00:20:47.604 [2024-12-05 14:26:53.230621] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.604 [2024-12-05 14:26:53.230625] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.604 [2024-12-05 14:26:53.230629] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x2105510) 00:20:47.604 [2024-12-05 14:26:53.230635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.604 [2024-12-05 14:26:53.230654] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151cc0, cid 3, qid 0 00:20:47.604 [2024-12-05 14:26:53.230704] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.604 [2024-12-05 14:26:53.230711] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.604 [2024-12-05 14:26:53.230714] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.604 [2024-12-05 14:26:53.230718] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151cc0) on tqpair=0x2105510 00:20:47.604 [2024-12-05 14:26:53.230728] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.604 [2024-12-05 14:26:53.230732] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.604 [2024-12-05 14:26:53.230736] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2105510) 00:20:47.604 [2024-12-05 14:26:53.230743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.604 [2024-12-05 14:26:53.230761] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151cc0, cid 3, qid 0 00:20:47.604 [2024-12-05 14:26:53.230865] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.604 [2024-12-05 14:26:53.230873] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.604 [2024-12-05 14:26:53.230877] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.604 [2024-12-05 14:26:53.230881] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151cc0) on tqpair=0x2105510 00:20:47.604 [2024-12-05 14:26:53.230893] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.604 [2024-12-05 14:26:53.230897] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.604 [2024-12-05 14:26:53.230901] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2105510) 00:20:47.605 [2024-12-05 14:26:53.230908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.605 [2024-12-05 14:26:53.230929] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151cc0, cid 3, qid 0 00:20:47.605 [2024-12-05 14:26:53.230985] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.605 [2024-12-05 14:26:53.230991] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.605 [2024-12-05 14:26:53.230995] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.605 [2024-12-05 14:26:53.230999] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151cc0) on tqpair=0x2105510 00:20:47.605 [2024-12-05 14:26:53.231009] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.605 [2024-12-05 14:26:53.231014] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.605 [2024-12-05 14:26:53.231017] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2105510) 00:20:47.605 [2024-12-05 14:26:53.231024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
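[editor's note] The Discovery Log Page section printed above (Generation Counter 2, two records: the discovery subsystem itself and nqn.2016-06.io.spdk:cnode1, both TCP at 10.0.0.2:4420) is the payload returned by the GET LOG PAGE (02) capsules with log identifier 0x70 in this trace. A minimal sketch of fetching that page's header through the public API, assuming a connected ctrlr as in the earlier sketch; the helper names are illustrative and the follow-up read sized from numrec is omitted:

    #include <stdbool.h>
    #include "spdk/nvme.h"
    #include "spdk/nvmf_spec.h"

    static void
    get_log_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
            (void)cpl;              /* real code would check the completion status */
            *(bool *)arg = true;
    }

    /* Read the discovery log page header (log identifier 0x70, nsid 0 as in the trace). */
    static int
    read_discovery_log_header(struct spdk_nvme_ctrlr *ctrlr)
    {
            struct spdk_nvmf_discovery_log_page header = {0};
            bool done = false;
            int rc;

            rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY, 0,
                                                  &header, sizeof(header), 0,
                                                  get_log_done, &done);
            if (rc != 0) {
                    return rc;
            }
            while (!done) {
                    spdk_nvme_ctrlr_process_admin_completions(ctrlr);
            }
            /* header.genctr and header.numrec drive the follow-up read of the entries. */
            return 0;
    }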
00:20:47.605 [2024-12-05 14:26:53.231043] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151cc0, cid 3, qid 0 00:20:47.605 [2024-12-05 14:26:53.231091] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.605 [2024-12-05 14:26:53.231098] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.605 [2024-12-05 14:26:53.231102] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.605 [2024-12-05 14:26:53.231105] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151cc0) on tqpair=0x2105510 00:20:47.605 [2024-12-05 14:26:53.231116] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.605 [2024-12-05 14:26:53.231121] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.605 [2024-12-05 14:26:53.231124] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2105510) 00:20:47.605 [2024-12-05 14:26:53.231131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.605 [2024-12-05 14:26:53.231149] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151cc0, cid 3, qid 0 00:20:47.605 [2024-12-05 14:26:53.231227] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.605 [2024-12-05 14:26:53.231233] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.605 [2024-12-05 14:26:53.231237] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.605 [2024-12-05 14:26:53.231241] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151cc0) on tqpair=0x2105510 00:20:47.605 [2024-12-05 14:26:53.231251] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.605 [2024-12-05 14:26:53.231255] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.605 [2024-12-05 14:26:53.231259] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2105510) 00:20:47.605 [2024-12-05 14:26:53.231266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.605 [2024-12-05 14:26:53.231284] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151cc0, cid 3, qid 0 00:20:47.605 [2024-12-05 14:26:53.231330] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.605 [2024-12-05 14:26:53.231336] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.605 [2024-12-05 14:26:53.231340] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.605 [2024-12-05 14:26:53.231344] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151cc0) on tqpair=0x2105510 00:20:47.605 [2024-12-05 14:26:53.231354] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.605 [2024-12-05 14:26:53.231359] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.605 [2024-12-05 14:26:53.231362] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2105510) 00:20:47.605 [2024-12-05 14:26:53.231369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.605 [2024-12-05 14:26:53.231386] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151cc0, cid 3, qid 0 00:20:47.605 [2024-12-05 14:26:53.231437] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.605 [2024-12-05 14:26:53.231443] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.605 [2024-12-05 14:26:53.231447] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.605 [2024-12-05 14:26:53.231451] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151cc0) on tqpair=0x2105510 00:20:47.605 [2024-12-05 14:26:53.231462] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.605 [2024-12-05 14:26:53.231466] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.605 [2024-12-05 14:26:53.231470] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2105510) 00:20:47.605 [2024-12-05 14:26:53.231477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.605 [2024-12-05 14:26:53.231495] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151cc0, cid 3, qid 0 00:20:47.605 [2024-12-05 14:26:53.231546] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.605 [2024-12-05 14:26:53.231553] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.605 [2024-12-05 14:26:53.231556] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.605 [2024-12-05 14:26:53.231560] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151cc0) on tqpair=0x2105510 00:20:47.605 [2024-12-05 14:26:53.231570] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.605 [2024-12-05 14:26:53.231575] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.605 [2024-12-05 14:26:53.231578] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2105510) 00:20:47.605 [2024-12-05 14:26:53.231585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.605 [2024-12-05 14:26:53.231603] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151cc0, cid 3, qid 0 00:20:47.605 [2024-12-05 14:26:53.231648] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.605 [2024-12-05 14:26:53.231655] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.605 [2024-12-05 14:26:53.231658] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.605 [2024-12-05 14:26:53.231662] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151cc0) on tqpair=0x2105510 00:20:47.605 [2024-12-05 14:26:53.231673] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.605 [2024-12-05 14:26:53.231677] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.605 [2024-12-05 14:26:53.231680] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2105510) 00:20:47.605 [2024-12-05 14:26:53.231687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.605 [2024-12-05 14:26:53.231706] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151cc0, cid 3, qid 0 00:20:47.605 [2024-12-05 14:26:53.231756] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.605 [2024-12-05 14:26:53.231762] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.606 
[2024-12-05 14:26:53.231766] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.606 [2024-12-05 14:26:53.231770] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151cc0) on tqpair=0x2105510 00:20:47.606 [2024-12-05 14:26:53.231780] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.606 [2024-12-05 14:26:53.231784] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.606 [2024-12-05 14:26:53.231788] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2105510) 00:20:47.606 [2024-12-05 14:26:53.231795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.606 [2024-12-05 14:26:53.231812] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151cc0, cid 3, qid 0 00:20:47.606 [2024-12-05 14:26:53.231891] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.606 [2024-12-05 14:26:53.231899] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.606 [2024-12-05 14:26:53.231902] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.606 [2024-12-05 14:26:53.231906] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151cc0) on tqpair=0x2105510 00:20:47.606 [2024-12-05 14:26:53.231917] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.606 [2024-12-05 14:26:53.231921] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.606 [2024-12-05 14:26:53.231924] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2105510) 00:20:47.606 [2024-12-05 14:26:53.231931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.606 [2024-12-05 14:26:53.231951] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151cc0, cid 3, qid 0 00:20:47.606 [2024-12-05 14:26:53.232042] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.606 [2024-12-05 14:26:53.232049] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.606 [2024-12-05 14:26:53.232053] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.606 [2024-12-05 14:26:53.232057] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151cc0) on tqpair=0x2105510 00:20:47.606 [2024-12-05 14:26:53.232068] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.606 [2024-12-05 14:26:53.232072] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.606 [2024-12-05 14:26:53.232076] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2105510) 00:20:47.606 [2024-12-05 14:26:53.232083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.606 [2024-12-05 14:26:53.232102] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151cc0, cid 3, qid 0 00:20:47.606 [2024-12-05 14:26:53.232155] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.606 [2024-12-05 14:26:53.232162] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.606 [2024-12-05 14:26:53.232165] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.606 [2024-12-05 14:26:53.232169] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0x2151cc0) on tqpair=0x2105510 00:20:47.606 [2024-12-05 14:26:53.232180] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.606 [2024-12-05 14:26:53.232184] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.606 [2024-12-05 14:26:53.232188] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2105510) 00:20:47.606 [2024-12-05 14:26:53.232194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.606 [2024-12-05 14:26:53.232213] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151cc0, cid 3, qid 0 00:20:47.606 [2024-12-05 14:26:53.232263] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.606 [2024-12-05 14:26:53.232270] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.606 [2024-12-05 14:26:53.232273] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.606 [2024-12-05 14:26:53.232277] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151cc0) on tqpair=0x2105510 00:20:47.606 [2024-12-05 14:26:53.232288] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.606 [2024-12-05 14:26:53.232292] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.606 [2024-12-05 14:26:53.232296] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2105510) 00:20:47.606 [2024-12-05 14:26:53.232304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.606 [2024-12-05 14:26:53.232346] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151cc0, cid 3, qid 0 00:20:47.606 [2024-12-05 14:26:53.232411] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.606 [2024-12-05 14:26:53.232418] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.606 [2024-12-05 14:26:53.232422] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.606 [2024-12-05 14:26:53.232426] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151cc0) on tqpair=0x2105510 00:20:47.606 [2024-12-05 14:26:53.232437] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.606 [2024-12-05 14:26:53.232441] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.606 [2024-12-05 14:26:53.232445] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2105510) 00:20:47.606 [2024-12-05 14:26:53.232451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.606 [2024-12-05 14:26:53.232470] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151cc0, cid 3, qid 0 00:20:47.606 [2024-12-05 14:26:53.232524] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.606 [2024-12-05 14:26:53.232540] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.606 [2024-12-05 14:26:53.232545] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.606 [2024-12-05 14:26:53.232549] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151cc0) on tqpair=0x2105510 00:20:47.606 [2024-12-05 14:26:53.232560] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.606 [2024-12-05 14:26:53.232565] 
nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.606 [2024-12-05 14:26:53.232569] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2105510) 00:20:47.606 [2024-12-05 14:26:53.232576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.606 [2024-12-05 14:26:53.232595] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151cc0, cid 3, qid 0 00:20:47.606 [2024-12-05 14:26:53.232650] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.606 [2024-12-05 14:26:53.232657] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.606 [2024-12-05 14:26:53.232660] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.606 [2024-12-05 14:26:53.232664] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151cc0) on tqpair=0x2105510 00:20:47.606 [2024-12-05 14:26:53.232675] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.606 [2024-12-05 14:26:53.232679] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.606 [2024-12-05 14:26:53.232682] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2105510) 00:20:47.606 [2024-12-05 14:26:53.232689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.606 [2024-12-05 14:26:53.232707] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151cc0, cid 3, qid 0 00:20:47.606 [2024-12-05 14:26:53.232757] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.606 [2024-12-05 14:26:53.232764] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.606 [2024-12-05 14:26:53.232767] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.606 [2024-12-05 14:26:53.232771] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151cc0) on tqpair=0x2105510 00:20:47.606 [2024-12-05 14:26:53.232781] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.606 [2024-12-05 14:26:53.232786] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.606 [2024-12-05 14:26:53.232789] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2105510) 00:20:47.606 [2024-12-05 14:26:53.232796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.606 [2024-12-05 14:26:53.236890] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2151cc0, cid 3, qid 0 00:20:47.606 [2024-12-05 14:26:53.236957] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.606 [2024-12-05 14:26:53.236965] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.607 [2024-12-05 14:26:53.236969] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.607 [2024-12-05 14:26:53.236973] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2151cc0) on tqpair=0x2105510 00:20:47.607 [2024-12-05 14:26:53.236983] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:20:47.871 00:20:47.871 14:26:53 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 
trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:47.871 [2024-12-05 14:26:53.266520] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:47.871 [2024-12-05 14:26:53.266591] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93652 ] 00:20:47.871 [2024-12-05 14:26:53.406597] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:47.871 [2024-12-05 14:26:53.406667] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:47.871 [2024-12-05 14:26:53.406673] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:47.871 [2024-12-05 14:26:53.406683] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:47.871 [2024-12-05 14:26:53.406690] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:47.871 [2024-12-05 14:26:53.406788] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:47.871 [2024-12-05 14:26:53.406877] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1ae3510 0 00:20:47.871 [2024-12-05 14:26:53.411831] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:47.871 [2024-12-05 14:26:53.411854] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:47.871 [2024-12-05 14:26:53.411875] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:47.871 [2024-12-05 14:26:53.411879] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:47.871 [2024-12-05 14:26:53.411916] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.871 [2024-12-05 14:26:53.411922] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.871 [2024-12-05 14:26:53.411926] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ae3510) 00:20:47.871 [2024-12-05 14:26:53.411937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:47.871 [2024-12-05 14:26:53.411966] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2f8a0, cid 0, qid 0 00:20:47.871 [2024-12-05 14:26:53.419836] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.871 [2024-12-05 14:26:53.419857] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.871 [2024-12-05 14:26:53.419878] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.871 [2024-12-05 14:26:53.419882] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2f8a0) on tqpair=0x1ae3510 00:20:47.871 [2024-12-05 14:26:53.419910] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:47.871 [2024-12-05 14:26:53.419917] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:47.871 [2024-12-05 14:26:53.419923] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:47.871 [2024-12-05 14:26:53.419936] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 
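Sketch (not part of the captured output): the trace above shows spdk_nvme_identify being pointed at the TCP target and the admin queue coming up (icreq/icresp, FABRIC CONNECT, then VS/CAP property reads). A minimal C program using SPDK's public API can drive the same connection; the program name and error handling below are illustrative, and the transport ID string is taken verbatim from the -r argument in the log.

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr_opts ctrlr_opts;
	struct spdk_nvme_ctrlr *ctrlr;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";   /* hypothetical app name */
	if (spdk_env_init(&env_opts) < 0) {
		fprintf(stderr, "spdk_env_init failed\n");
		return 1;
	}

	/* Same transport ID string the harness passes to spdk_nvme_identify -r. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		fprintf(stderr, "failed to parse transport ID\n");
		return 1;
	}

	/* spdk_nvme_connect() performs the sequence traced in this log: the
	 * icreq/icresp exchange, FABRIC CONNECT, VS/CAP property reads,
	 * controller enable, IDENTIFY, AER configuration and keep-alive. */
	spdk_nvme_ctrlr_get_default_ctrlr_opts(&ctrlr_opts, sizeof(ctrlr_opts));
	ctrlr = spdk_nvme_connect(&trid, &ctrlr_opts, sizeof(ctrlr_opts));
	if (ctrlr == NULL) {
		fprintf(stderr, "failed to connect to %s\n", trid.traddr);
		return 1;
	}

	printf("connected to %s\n", trid.subnqn);
	spdk_nvme_detach(ctrlr);
	return 0;
}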
00:20:47.871 [2024-12-05 14:26:53.419941] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.871 [2024-12-05 14:26:53.419944] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ae3510) 00:20:47.871 [2024-12-05 14:26:53.419952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.871 [2024-12-05 14:26:53.419979] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2f8a0, cid 0, qid 0 00:20:47.871 [2024-12-05 14:26:53.420063] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.871 [2024-12-05 14:26:53.420070] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.871 [2024-12-05 14:26:53.420074] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.871 [2024-12-05 14:26:53.420078] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2f8a0) on tqpair=0x1ae3510 00:20:47.871 [2024-12-05 14:26:53.420083] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:47.871 [2024-12-05 14:26:53.420091] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:47.871 [2024-12-05 14:26:53.420098] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.871 [2024-12-05 14:26:53.420102] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.871 [2024-12-05 14:26:53.420106] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ae3510) 00:20:47.871 [2024-12-05 14:26:53.420113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.871 [2024-12-05 14:26:53.420149] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2f8a0, cid 0, qid 0 00:20:47.871 [2024-12-05 14:26:53.420215] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.871 [2024-12-05 14:26:53.420222] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.871 [2024-12-05 14:26:53.420225] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.871 [2024-12-05 14:26:53.420229] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2f8a0) on tqpair=0x1ae3510 00:20:47.871 [2024-12-05 14:26:53.420236] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:47.871 [2024-12-05 14:26:53.420244] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:47.871 [2024-12-05 14:26:53.420251] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.871 [2024-12-05 14:26:53.420261] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.871 [2024-12-05 14:26:53.420265] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ae3510) 00:20:47.871 [2024-12-05 14:26:53.420272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.871 [2024-12-05 14:26:53.420291] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2f8a0, cid 0, qid 0 00:20:47.871 [2024-12-05 14:26:53.420350] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: 
pdu type = 5 00:20:47.871 [2024-12-05 14:26:53.420356] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.871 [2024-12-05 14:26:53.420360] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.871 [2024-12-05 14:26:53.420363] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2f8a0) on tqpair=0x1ae3510 00:20:47.871 [2024-12-05 14:26:53.420370] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:47.871 [2024-12-05 14:26:53.420380] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.871 [2024-12-05 14:26:53.420384] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.871 [2024-12-05 14:26:53.420388] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ae3510) 00:20:47.871 [2024-12-05 14:26:53.420395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.871 [2024-12-05 14:26:53.420414] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2f8a0, cid 0, qid 0 00:20:47.871 [2024-12-05 14:26:53.420464] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.871 [2024-12-05 14:26:53.420470] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.871 [2024-12-05 14:26:53.420474] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.871 [2024-12-05 14:26:53.420478] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2f8a0) on tqpair=0x1ae3510 00:20:47.871 [2024-12-05 14:26:53.420483] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:47.871 [2024-12-05 14:26:53.420489] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:47.871 [2024-12-05 14:26:53.420496] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:47.871 [2024-12-05 14:26:53.420601] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:47.871 [2024-12-05 14:26:53.420605] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:47.871 [2024-12-05 14:26:53.420613] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.871 [2024-12-05 14:26:53.420617] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.871 [2024-12-05 14:26:53.420621] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ae3510) 00:20:47.872 [2024-12-05 14:26:53.420628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.872 [2024-12-05 14:26:53.420647] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2f8a0, cid 0, qid 0 00:20:47.872 [2024-12-05 14:26:53.420696] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.872 [2024-12-05 14:26:53.420702] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.872 [2024-12-05 14:26:53.420705] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
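Sketch (not part of the captured output): the property GET/SET commands around this point are the controller enable handshake, where the driver clears and then sets CC.EN and polls CSTS.RDY. Applications do not drive CC/CSTS themselves (spdk_nvme_connect() does it, as traced here), but the ready bit can be observed afterwards through the public register accessor; the helper name below is hypothetical.

#include <stdbool.h>
#include "spdk/nvme.h"

static bool ctrlr_is_ready(struct spdk_nvme_ctrlr *ctrlr)
{
	union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

	/* RDY mirrors the CC.EN / CSTS.RDY handshake traced in the
	 * surrounding nvme_ctrlr.c debug messages. */
	return csts.bits.rdy == 1;
}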
00:20:47.872 [2024-12-05 14:26:53.420709] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2f8a0) on tqpair=0x1ae3510 00:20:47.872 [2024-12-05 14:26:53.420716] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:47.872 [2024-12-05 14:26:53.420725] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.872 [2024-12-05 14:26:53.420729] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.872 [2024-12-05 14:26:53.420733] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ae3510) 00:20:47.872 [2024-12-05 14:26:53.420740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.872 [2024-12-05 14:26:53.420758] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2f8a0, cid 0, qid 0 00:20:47.872 [2024-12-05 14:26:53.420811] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.872 [2024-12-05 14:26:53.420833] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.872 [2024-12-05 14:26:53.420837] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.872 [2024-12-05 14:26:53.420841] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2f8a0) on tqpair=0x1ae3510 00:20:47.872 [2024-12-05 14:26:53.420847] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:47.872 [2024-12-05 14:26:53.420852] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:47.872 [2024-12-05 14:26:53.420860] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:47.872 [2024-12-05 14:26:53.420874] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:47.872 [2024-12-05 14:26:53.420895] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.872 [2024-12-05 14:26:53.420900] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.872 [2024-12-05 14:26:53.420904] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ae3510) 00:20:47.872 [2024-12-05 14:26:53.420912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.872 [2024-12-05 14:26:53.420934] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2f8a0, cid 0, qid 0 00:20:47.872 [2024-12-05 14:26:53.421019] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:47.872 [2024-12-05 14:26:53.421026] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:47.872 [2024-12-05 14:26:53.421030] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:47.872 [2024-12-05 14:26:53.421034] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ae3510): datao=0, datal=4096, cccid=0 00:20:47.872 [2024-12-05 14:26:53.421038] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b2f8a0) on tqpair(0x1ae3510): expected_datao=0, payload_size=4096 00:20:47.872 [2024-12-05 14:26:53.421047] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:47.872 [2024-12-05 14:26:53.421051] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:47.872 [2024-12-05 14:26:53.421059] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.872 [2024-12-05 14:26:53.421065] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.872 [2024-12-05 14:26:53.421069] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.872 [2024-12-05 14:26:53.421073] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2f8a0) on tqpair=0x1ae3510 00:20:47.872 [2024-12-05 14:26:53.421081] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:47.872 [2024-12-05 14:26:53.421087] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:47.872 [2024-12-05 14:26:53.421091] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:47.872 [2024-12-05 14:26:53.421096] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:47.872 [2024-12-05 14:26:53.421101] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:47.872 [2024-12-05 14:26:53.421106] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:47.872 [2024-12-05 14:26:53.421119] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:47.872 [2024-12-05 14:26:53.421127] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.872 [2024-12-05 14:26:53.421131] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.872 [2024-12-05 14:26:53.421135] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ae3510) 00:20:47.872 [2024-12-05 14:26:53.421142] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:47.872 [2024-12-05 14:26:53.421164] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2f8a0, cid 0, qid 0 00:20:47.872 [2024-12-05 14:26:53.421229] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.872 [2024-12-05 14:26:53.421236] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.872 [2024-12-05 14:26:53.421239] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.872 [2024-12-05 14:26:53.421243] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2f8a0) on tqpair=0x1ae3510 00:20:47.872 [2024-12-05 14:26:53.421251] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.872 [2024-12-05 14:26:53.421255] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.872 [2024-12-05 14:26:53.421258] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ae3510) 00:20:47.872 [2024-12-05 14:26:53.421265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.872 [2024-12-05 14:26:53.421271] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.872 [2024-12-05 14:26:53.421274] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.872 [2024-12-05 14:26:53.421278] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1ae3510) 00:20:47.872 [2024-12-05 14:26:53.421283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.872 [2024-12-05 14:26:53.421289] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.872 [2024-12-05 14:26:53.421293] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.872 [2024-12-05 14:26:53.421296] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1ae3510) 00:20:47.872 [2024-12-05 14:26:53.421301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.872 [2024-12-05 14:26:53.421307] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.872 [2024-12-05 14:26:53.421311] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.872 [2024-12-05 14:26:53.421314] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae3510) 00:20:47.872 [2024-12-05 14:26:53.421319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.872 [2024-12-05 14:26:53.421324] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:47.872 [2024-12-05 14:26:53.421336] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:47.872 [2024-12-05 14:26:53.421343] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.872 [2024-12-05 14:26:53.421347] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.872 [2024-12-05 14:26:53.421351] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ae3510) 00:20:47.872 [2024-12-05 14:26:53.421357] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.872 [2024-12-05 14:26:53.421378] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2f8a0, cid 0, qid 0 00:20:47.872 [2024-12-05 14:26:53.421384] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fa00, cid 1, qid 0 00:20:47.873 [2024-12-05 14:26:53.421389] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fb60, cid 2, qid 0 00:20:47.873 [2024-12-05 14:26:53.421393] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fcc0, cid 3, qid 0 00:20:47.873 [2024-12-05 14:26:53.421398] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fe20, cid 4, qid 0 00:20:47.873 [2024-12-05 14:26:53.421480] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.873 [2024-12-05 14:26:53.421486] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.873 [2024-12-05 14:26:53.421490] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.873 [2024-12-05 14:26:53.421493] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fe20) on tqpair=0x1ae3510 00:20:47.873 [2024-12-05 14:26:53.421500] 
nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:47.873 [2024-12-05 14:26:53.421505] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:47.873 [2024-12-05 14:26:53.421513] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:47.873 [2024-12-05 14:26:53.421524] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:47.873 [2024-12-05 14:26:53.421530] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.873 [2024-12-05 14:26:53.421534] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.873 [2024-12-05 14:26:53.421538] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ae3510) 00:20:47.873 [2024-12-05 14:26:53.421545] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:47.873 [2024-12-05 14:26:53.421565] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fe20, cid 4, qid 0 00:20:47.873 [2024-12-05 14:26:53.421620] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.873 [2024-12-05 14:26:53.421627] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.873 [2024-12-05 14:26:53.421630] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.873 [2024-12-05 14:26:53.421634] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fe20) on tqpair=0x1ae3510 00:20:47.873 [2024-12-05 14:26:53.421690] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:47.873 [2024-12-05 14:26:53.421700] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:47.873 [2024-12-05 14:26:53.421708] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.873 [2024-12-05 14:26:53.421713] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.873 [2024-12-05 14:26:53.421716] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ae3510) 00:20:47.873 [2024-12-05 14:26:53.421723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.873 [2024-12-05 14:26:53.421742] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fe20, cid 4, qid 0 00:20:47.873 [2024-12-05 14:26:53.421827] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:47.873 [2024-12-05 14:26:53.421835] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:47.873 [2024-12-05 14:26:53.421839] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:47.873 [2024-12-05 14:26:53.421842] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ae3510): datao=0, datal=4096, cccid=4 00:20:47.873 [2024-12-05 14:26:53.421847] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b2fe20) on tqpair(0x1ae3510): expected_datao=0, payload_size=4096 00:20:47.873 [2024-12-05 
14:26:53.421854] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:47.873 [2024-12-05 14:26:53.421858] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:47.873 [2024-12-05 14:26:53.421867] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.873 [2024-12-05 14:26:53.421872] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.873 [2024-12-05 14:26:53.421876] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.873 [2024-12-05 14:26:53.421880] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fe20) on tqpair=0x1ae3510 00:20:47.873 [2024-12-05 14:26:53.421895] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:47.873 [2024-12-05 14:26:53.421905] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:47.873 [2024-12-05 14:26:53.421915] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:47.873 [2024-12-05 14:26:53.421923] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.873 [2024-12-05 14:26:53.421927] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.873 [2024-12-05 14:26:53.421930] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ae3510) 00:20:47.873 [2024-12-05 14:26:53.421937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.873 [2024-12-05 14:26:53.421960] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fe20, cid 4, qid 0 00:20:47.873 [2024-12-05 14:26:53.422032] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:47.873 [2024-12-05 14:26:53.422038] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:47.873 [2024-12-05 14:26:53.422042] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:47.873 [2024-12-05 14:26:53.422046] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ae3510): datao=0, datal=4096, cccid=4 00:20:47.873 [2024-12-05 14:26:53.422050] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b2fe20) on tqpair(0x1ae3510): expected_datao=0, payload_size=4096 00:20:47.873 [2024-12-05 14:26:53.422057] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:47.873 [2024-12-05 14:26:53.422061] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:47.873 [2024-12-05 14:26:53.422069] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.873 [2024-12-05 14:26:53.422075] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.873 [2024-12-05 14:26:53.422078] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.873 [2024-12-05 14:26:53.422082] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fe20) on tqpair=0x1ae3510 00:20:47.873 [2024-12-05 14:26:53.422098] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:47.873 [2024-12-05 14:26:53.422109] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 
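Sketch (not part of the captured output): the log here records "Namespace 1 was added" followed by IDENTIFY namespace and namespace ID descriptor commands. After spdk_nvme_connect() returns, an application can walk the same active namespaces through the public API; the function name below is hypothetical and it assumes the includes and ctrlr handle from the connection sketch above plus <inttypes.h>.

#include <inttypes.h>
#include <stdio.h>
#include "spdk/nvme.h"

static void list_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
	uint32_t nsid;

	/* Iterate the active namespace IDs discovered by the IDENTIFY
	 * commands traced above. */
	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
	     nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

		if (ns == NULL) {
			continue;
		}
		printf("nsid %u: %u-byte sectors, %" PRIu64 " bytes total\n",
		       nsid,
		       spdk_nvme_ns_get_sector_size(ns),
		       spdk_nvme_ns_get_size(ns));
	}
}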
00:20:47.873 [2024-12-05 14:26:53.422117] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.873 [2024-12-05 14:26:53.422120] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.873 [2024-12-05 14:26:53.422124] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ae3510) 00:20:47.873 [2024-12-05 14:26:53.422131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.873 [2024-12-05 14:26:53.422152] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fe20, cid 4, qid 0 00:20:47.873 [2024-12-05 14:26:53.422211] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:47.873 [2024-12-05 14:26:53.422217] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:47.873 [2024-12-05 14:26:53.422221] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:47.873 [2024-12-05 14:26:53.422224] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ae3510): datao=0, datal=4096, cccid=4 00:20:47.873 [2024-12-05 14:26:53.422229] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b2fe20) on tqpair(0x1ae3510): expected_datao=0, payload_size=4096 00:20:47.873 [2024-12-05 14:26:53.422236] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:47.873 [2024-12-05 14:26:53.422240] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:47.873 [2024-12-05 14:26:53.422248] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.873 [2024-12-05 14:26:53.422254] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.873 [2024-12-05 14:26:53.422258] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.873 [2024-12-05 14:26:53.422261] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fe20) on tqpair=0x1ae3510 00:20:47.873 [2024-12-05 14:26:53.422270] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:47.874 [2024-12-05 14:26:53.422279] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:47.874 [2024-12-05 14:26:53.422290] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:47.874 [2024-12-05 14:26:53.422296] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:47.874 [2024-12-05 14:26:53.422301] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:47.874 [2024-12-05 14:26:53.422307] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:47.874 [2024-12-05 14:26:53.422311] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:47.874 [2024-12-05 14:26:53.422316] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:47.874 [2024-12-05 14:26:53.422330] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.874 [2024-12-05 
14:26:53.422335] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.874 [2024-12-05 14:26:53.422338] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ae3510) 00:20:47.874 [2024-12-05 14:26:53.422345] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.874 [2024-12-05 14:26:53.422352] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.874 [2024-12-05 14:26:53.422356] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.874 [2024-12-05 14:26:53.422359] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ae3510) 00:20:47.874 [2024-12-05 14:26:53.422365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.874 [2024-12-05 14:26:53.422390] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fe20, cid 4, qid 0 00:20:47.874 [2024-12-05 14:26:53.422397] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2ff80, cid 5, qid 0 00:20:47.874 [2024-12-05 14:26:53.422470] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.874 [2024-12-05 14:26:53.422477] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.874 [2024-12-05 14:26:53.422481] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.874 [2024-12-05 14:26:53.422485] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fe20) on tqpair=0x1ae3510 00:20:47.874 [2024-12-05 14:26:53.422492] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.874 [2024-12-05 14:26:53.422498] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.874 [2024-12-05 14:26:53.422502] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.874 [2024-12-05 14:26:53.422506] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2ff80) on tqpair=0x1ae3510 00:20:47.874 [2024-12-05 14:26:53.422517] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.874 [2024-12-05 14:26:53.422521] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.874 [2024-12-05 14:26:53.422525] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ae3510) 00:20:47.874 [2024-12-05 14:26:53.422532] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.874 [2024-12-05 14:26:53.422551] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2ff80, cid 5, qid 0 00:20:47.874 [2024-12-05 14:26:53.422608] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.874 [2024-12-05 14:26:53.422614] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.874 [2024-12-05 14:26:53.422618] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.874 [2024-12-05 14:26:53.422622] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2ff80) on tqpair=0x1ae3510 00:20:47.874 [2024-12-05 14:26:53.422634] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.874 [2024-12-05 14:26:53.422638] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.874 [2024-12-05 14:26:53.422642] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ae3510) 00:20:47.874 [2024-12-05 14:26:53.422649] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.874 [2024-12-05 14:26:53.422668] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2ff80, cid 5, qid 0 00:20:47.874 [2024-12-05 14:26:53.422725] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.874 [2024-12-05 14:26:53.422732] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.874 [2024-12-05 14:26:53.422736] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.874 [2024-12-05 14:26:53.422740] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2ff80) on tqpair=0x1ae3510 00:20:47.874 [2024-12-05 14:26:53.422751] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.874 [2024-12-05 14:26:53.422755] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.874 [2024-12-05 14:26:53.422759] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ae3510) 00:20:47.874 [2024-12-05 14:26:53.422766] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.874 [2024-12-05 14:26:53.422785] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2ff80, cid 5, qid 0 00:20:47.874 [2024-12-05 14:26:53.422868] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.874 [2024-12-05 14:26:53.422876] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.874 [2024-12-05 14:26:53.422880] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.874 [2024-12-05 14:26:53.422885] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2ff80) on tqpair=0x1ae3510 00:20:47.874 [2024-12-05 14:26:53.422899] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.874 [2024-12-05 14:26:53.422904] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.874 [2024-12-05 14:26:53.422908] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ae3510) 00:20:47.874 [2024-12-05 14:26:53.422916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.874 [2024-12-05 14:26:53.422923] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.874 [2024-12-05 14:26:53.422928] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.874 [2024-12-05 14:26:53.422931] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ae3510) 00:20:47.874 [2024-12-05 14:26:53.422938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.874 [2024-12-05 14:26:53.422945] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.874 [2024-12-05 14:26:53.422950] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.874 [2024-12-05 14:26:53.422954] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1ae3510) 00:20:47.874 [2024-12-05 14:26:53.422960] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.874 [2024-12-05 14:26:53.422968] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.874 [2024-12-05 14:26:53.422972] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.874 [2024-12-05 14:26:53.422976] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1ae3510) 00:20:47.874 [2024-12-05 14:26:53.422983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.874 [2024-12-05 14:26:53.423006] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2ff80, cid 5, qid 0 00:20:47.874 [2024-12-05 14:26:53.423014] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fe20, cid 4, qid 0 00:20:47.874 [2024-12-05 14:26:53.423019] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b300e0, cid 6, qid 0 00:20:47.874 [2024-12-05 14:26:53.423024] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b30240, cid 7, qid 0 00:20:47.874 [2024-12-05 14:26:53.423145] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:47.874 [2024-12-05 14:26:53.423152] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:47.874 [2024-12-05 14:26:53.423156] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:47.874 [2024-12-05 14:26:53.423160] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ae3510): datao=0, datal=8192, cccid=5 00:20:47.874 [2024-12-05 14:26:53.423164] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b2ff80) on tqpair(0x1ae3510): expected_datao=0, payload_size=8192 00:20:47.875 [2024-12-05 14:26:53.423185] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:47.875 [2024-12-05 14:26:53.423190] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:47.875 [2024-12-05 14:26:53.423197] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:47.875 [2024-12-05 14:26:53.423203] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:47.875 [2024-12-05 14:26:53.423207] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:47.875 [2024-12-05 14:26:53.423211] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ae3510): datao=0, datal=512, cccid=4 00:20:47.875 [2024-12-05 14:26:53.423215] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b2fe20) on tqpair(0x1ae3510): expected_datao=0, payload_size=512 00:20:47.875 [2024-12-05 14:26:53.423223] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:47.875 [2024-12-05 14:26:53.423227] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:47.875 [2024-12-05 14:26:53.423232] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:47.875 [2024-12-05 14:26:53.423238] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:47.875 [2024-12-05 14:26:53.423242] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:47.875 [2024-12-05 14:26:53.423246] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ae3510): datao=0, datal=512, cccid=6 00:20:47.875 [2024-12-05 14:26:53.423250] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x1b300e0) on tqpair(0x1ae3510): expected_datao=0, payload_size=512 00:20:47.875 [2024-12-05 14:26:53.423257] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:47.875 [2024-12-05 14:26:53.423261] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:47.875 [2024-12-05 14:26:53.423267] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:47.875 [2024-12-05 14:26:53.423273] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:47.875 [2024-12-05 14:26:53.423277] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:47.875 [2024-12-05 14:26:53.423280] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ae3510): datao=0, datal=4096, cccid=7 00:20:47.875 [2024-12-05 14:26:53.423285] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b30240) on tqpair(0x1ae3510): expected_datao=0, payload_size=4096 00:20:47.875 [2024-12-05 14:26:53.423293] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:47.875 [2024-12-05 14:26:53.423297] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:47.875 [2024-12-05 14:26:53.423305] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.875 ===================================================== 00:20:47.875 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:47.875 ===================================================== 00:20:47.875 Controller Capabilities/Features 00:20:47.875 ================================ 00:20:47.875 Vendor ID: 8086 00:20:47.875 Subsystem Vendor ID: 8086 00:20:47.875 Serial Number: SPDK00000000000001 00:20:47.875 Model Number: SPDK bdev Controller 00:20:47.875 Firmware Version: 24.01.1 00:20:47.875 Recommended Arb Burst: 6 00:20:47.875 IEEE OUI Identifier: e4 d2 5c 00:20:47.875 Multi-path I/O 00:20:47.875 May have multiple subsystem ports: Yes 00:20:47.875 May have multiple controllers: Yes 00:20:47.875 Associated with SR-IOV VF: No 00:20:47.875 Max Data Transfer Size: 131072 00:20:47.875 Max Number of Namespaces: 32 00:20:47.875 Max Number of I/O Queues: 127 00:20:47.875 NVMe Specification Version (VS): 1.3 00:20:47.875 NVMe Specification Version (Identify): 1.3 00:20:47.875 Maximum Queue Entries: 128 00:20:47.875 Contiguous Queues Required: Yes 00:20:47.875 Arbitration Mechanisms Supported 00:20:47.875 Weighted Round Robin: Not Supported 00:20:47.875 Vendor Specific: Not Supported 00:20:47.875 Reset Timeout: 15000 ms 00:20:47.875 Doorbell Stride: 4 bytes 00:20:47.875 NVM Subsystem Reset: Not Supported 00:20:47.875 Command Sets Supported 00:20:47.875 NVM Command Set: Supported 00:20:47.875 Boot Partition: Not Supported 00:20:47.875 Memory Page Size Minimum: 4096 bytes 00:20:47.875 Memory Page Size Maximum: 4096 bytes 00:20:47.875 Persistent Memory Region: Not Supported 00:20:47.875 Optional Asynchronous Events Supported 00:20:47.875 Namespace Attribute Notices: Supported 00:20:47.875 Firmware Activation Notices: Not Supported 00:20:47.875 ANA Change Notices: Not Supported 00:20:47.875 PLE Aggregate Log Change Notices: Not Supported 00:20:47.875 LBA Status Info Alert Notices: Not Supported 00:20:47.875 EGE Aggregate Log Change Notices: Not Supported 00:20:47.875 Normal NVM Subsystem Shutdown event: Not Supported 00:20:47.875 Zone Descriptor Change Notices: Not Supported 00:20:47.875 Discovery Log Change Notices: Not Supported 00:20:47.875 Controller Attributes 00:20:47.875 128-bit Host Identifier: Supported 00:20:47.875 Non-Operational 
Permissive Mode: Not Supported 00:20:47.875 NVM Sets: Not Supported 00:20:47.875 Read Recovery Levels: Not Supported 00:20:47.875 Endurance Groups: Not Supported 00:20:47.875 Predictable Latency Mode: Not Supported 00:20:47.875 Traffic Based Keep ALive: Not Supported 00:20:47.875 Namespace Granularity: Not Supported 00:20:47.875 SQ Associations: Not Supported 00:20:47.875 UUID List: Not Supported 00:20:47.875 Multi-Domain Subsystem: Not Supported 00:20:47.875 Fixed Capacity Management: Not Supported 00:20:47.875 Variable Capacity Management: Not Supported 00:20:47.875 Delete Endurance Group: Not Supported 00:20:47.875 Delete NVM Set: Not Supported 00:20:47.875 Extended LBA Formats Supported: Not Supported 00:20:47.875 Flexible Data Placement Supported: Not Supported 00:20:47.875 00:20:47.875 Controller Memory Buffer Support 00:20:47.875 ================================ 00:20:47.875 Supported: No 00:20:47.875 00:20:47.875 Persistent Memory Region Support 00:20:47.875 ================================ 00:20:47.875 Supported: No 00:20:47.875 00:20:47.875 Admin Command Set Attributes 00:20:47.875 ============================ 00:20:47.875 Security Send/Receive: Not Supported 00:20:47.875 Format NVM: Not Supported 00:20:47.875 Firmware Activate/Download: Not Supported 00:20:47.875 Namespace Management: Not Supported 00:20:47.875 Device Self-Test: Not Supported 00:20:47.875 Directives: Not Supported 00:20:47.875 NVMe-MI: Not Supported 00:20:47.875 Virtualization Management: Not Supported 00:20:47.875 Doorbell Buffer Config: Not Supported 00:20:47.875 Get LBA Status Capability: Not Supported 00:20:47.875 Command & Feature Lockdown Capability: Not Supported 00:20:47.875 Abort Command Limit: 4 00:20:47.875 Async Event Request Limit: 4 00:20:47.875 Number of Firmware Slots: N/A 00:20:47.875 Firmware Slot 1 Read-Only: N/A 00:20:47.875 Firmware Activation Without Reset: [2024-12-05 14:26:53.423312] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.875 [2024-12-05 14:26:53.423316] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.875 [2024-12-05 14:26:53.423334] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2ff80) on tqpair=0x1ae3510 00:20:47.875 [2024-12-05 14:26:53.423351] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.875 [2024-12-05 14:26:53.423358] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.875 [2024-12-05 14:26:53.423361] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.875 [2024-12-05 14:26:53.423365] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fe20) on tqpair=0x1ae3510 00:20:47.876 [2024-12-05 14:26:53.423376] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.876 [2024-12-05 14:26:53.423382] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.876 [2024-12-05 14:26:53.423385] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.876 [2024-12-05 14:26:53.423389] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b300e0) on tqpair=0x1ae3510 00:20:47.876 [2024-12-05 14:26:53.423397] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.876 [2024-12-05 14:26:53.423403] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.876 [2024-12-05 14:26:53.423406] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.876 [2024-12-05 14:26:53.423410] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b30240) on tqpair=0x1ae3510 00:20:47.876 N/A 00:20:47.876 Multiple Update Detection Support: N/A 00:20:47.876 Firmware Update Granularity: No Information Provided 00:20:47.876 Per-Namespace SMART Log: No 00:20:47.876 Asymmetric Namespace Access Log Page: Not Supported 00:20:47.876 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:47.876 Command Effects Log Page: Supported 00:20:47.876 Get Log Page Extended Data: Supported 00:20:47.876 Telemetry Log Pages: Not Supported 00:20:47.876 Persistent Event Log Pages: Not Supported 00:20:47.876 Supported Log Pages Log Page: May Support 00:20:47.876 Commands Supported & Effects Log Page: Not Supported 00:20:47.876 Feature Identifiers & Effects Log Page:May Support 00:20:47.876 NVMe-MI Commands & Effects Log Page: May Support 00:20:47.876 Data Area 4 for Telemetry Log: Not Supported 00:20:47.876 Error Log Page Entries Supported: 128 00:20:47.876 Keep Alive: Supported 00:20:47.876 Keep Alive Granularity: 10000 ms 00:20:47.876 00:20:47.876 NVM Command Set Attributes 00:20:47.876 ========================== 00:20:47.876 Submission Queue Entry Size 00:20:47.876 Max: 64 00:20:47.876 Min: 64 00:20:47.876 Completion Queue Entry Size 00:20:47.876 Max: 16 00:20:47.876 Min: 16 00:20:47.876 Number of Namespaces: 32 00:20:47.876 Compare Command: Supported 00:20:47.876 Write Uncorrectable Command: Not Supported 00:20:47.876 Dataset Management Command: Supported 00:20:47.876 Write Zeroes Command: Supported 00:20:47.876 Set Features Save Field: Not Supported 00:20:47.876 Reservations: Supported 00:20:47.876 Timestamp: Not Supported 00:20:47.876 Copy: Supported 00:20:47.876 Volatile Write Cache: Present 00:20:47.876 Atomic Write Unit (Normal): 1 00:20:47.876 Atomic Write Unit (PFail): 1 00:20:47.876 Atomic Compare & Write Unit: 1 00:20:47.876 Fused Compare & Write: Supported 00:20:47.876 Scatter-Gather List 00:20:47.876 SGL Command Set: Supported 00:20:47.876 SGL Keyed: Supported 00:20:47.876 SGL Bit Bucket Descriptor: Not Supported 00:20:47.876 SGL Metadata Pointer: Not Supported 00:20:47.876 Oversized SGL: Not Supported 00:20:47.876 SGL Metadata Address: Not Supported 00:20:47.876 SGL Offset: Supported 00:20:47.876 Transport SGL Data Block: Not Supported 00:20:47.876 Replay Protected Memory Block: Not Supported 00:20:47.876 00:20:47.876 Firmware Slot Information 00:20:47.876 ========================= 00:20:47.876 Active slot: 1 00:20:47.876 Slot 1 Firmware Revision: 24.01.1 00:20:47.876 00:20:47.876 00:20:47.876 Commands Supported and Effects 00:20:47.876 ============================== 00:20:47.876 Admin Commands 00:20:47.876 -------------- 00:20:47.876 Get Log Page (02h): Supported 00:20:47.876 Identify (06h): Supported 00:20:47.876 Abort (08h): Supported 00:20:47.876 Set Features (09h): Supported 00:20:47.876 Get Features (0Ah): Supported 00:20:47.876 Asynchronous Event Request (0Ch): Supported 00:20:47.876 Keep Alive (18h): Supported 00:20:47.876 I/O Commands 00:20:47.876 ------------ 00:20:47.876 Flush (00h): Supported LBA-Change 00:20:47.876 Write (01h): Supported LBA-Change 00:20:47.876 Read (02h): Supported 00:20:47.876 Compare (05h): Supported 00:20:47.876 Write Zeroes (08h): Supported LBA-Change 00:20:47.876 Dataset Management (09h): Supported LBA-Change 00:20:47.876 Copy (19h): Supported LBA-Change 00:20:47.876 Unknown (79h): Supported LBA-Change 00:20:47.876 Unknown (7Ah): Supported 00:20:47.876 00:20:47.876 Error Log 00:20:47.876 ========= 00:20:47.876 00:20:47.876 Arbitration 00:20:47.876 
=========== 00:20:47.876 Arbitration Burst: 1 00:20:47.876 00:20:47.876 Power Management 00:20:47.876 ================ 00:20:47.876 Number of Power States: 1 00:20:47.876 Current Power State: Power State #0 00:20:47.876 Power State #0: 00:20:47.876 Max Power: 0.00 W 00:20:47.876 Non-Operational State: Operational 00:20:47.876 Entry Latency: Not Reported 00:20:47.876 Exit Latency: Not Reported 00:20:47.876 Relative Read Throughput: 0 00:20:47.876 Relative Read Latency: 0 00:20:47.876 Relative Write Throughput: 0 00:20:47.876 Relative Write Latency: 0 00:20:47.876 Idle Power: Not Reported 00:20:47.876 Active Power: Not Reported 00:20:47.876 Non-Operational Permissive Mode: Not Supported 00:20:47.876 00:20:47.876 Health Information 00:20:47.876 ================== 00:20:47.876 Critical Warnings: 00:20:47.876 Available Spare Space: OK 00:20:47.876 Temperature: OK 00:20:47.876 Device Reliability: OK 00:20:47.876 Read Only: No 00:20:47.876 Volatile Memory Backup: OK 00:20:47.876 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:47.876 Temperature Threshold: [2024-12-05 14:26:53.423514] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.876 [2024-12-05 14:26:53.423521] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.876 [2024-12-05 14:26:53.423525] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1ae3510) 00:20:47.876 [2024-12-05 14:26:53.423532] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.876 [2024-12-05 14:26:53.423556] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b30240, cid 7, qid 0 00:20:47.876 [2024-12-05 14:26:53.423619] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.876 [2024-12-05 14:26:53.423626] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.876 [2024-12-05 14:26:53.423644] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.877 [2024-12-05 14:26:53.423648] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b30240) on tqpair=0x1ae3510 00:20:47.877 [2024-12-05 14:26:53.423681] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:47.877 [2024-12-05 14:26:53.423694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.877 [2024-12-05 14:26:53.423700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.877 [2024-12-05 14:26:53.423707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.877 [2024-12-05 14:26:53.423712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.877 [2024-12-05 14:26:53.423721] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.877 [2024-12-05 14:26:53.423725] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.877 [2024-12-05 14:26:53.423728] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae3510) 00:20:47.877 [2024-12-05 14:26:53.423735] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
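Sketch (not part of the captured output): the report interleaved with the debug lines above (model "SPDK bdev Controller", firmware 24.01.1, Max Data Transfer Size 131072, VS 1.3, and so on) is what spdk_nvme_identify formats from the controller's identify data and register values. A hedged example of reading a few of the same fields through the public accessors, assuming the ctrlr handle from the earlier connection sketch; the helper name is hypothetical.

#include <stdio.h>
#include "spdk/nvme.h"

static void print_ctrlr_summary(struct spdk_nvme_ctrlr *ctrlr)
{
	const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);
	union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);

	/* Identify strings are fixed-width and space padded, not NUL terminated. */
	printf("Model Number:      %.40s\n", (const char *)cdata->mn);
	printf("Serial Number:     %.20s\n", (const char *)cdata->sn);
	printf("Firmware Version:  %.8s\n", (const char *)cdata->fr);
	printf("NVMe Spec Version: %u.%u\n", vs.bits.mjr, vs.bits.mnr);

	/* MDTS is 2^mdts units of the minimum page size, 2^(12 + CAP.MPSMIN);
	 * a value of 0 would mean no transfer size limit. */
	if (cdata->mdts != 0) {
		printf("Max Data Transfer Size: %u bytes\n",
		       (1u << cdata->mdts) << (12 + cap.bits.mpsmin));
	}
}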
00:20:47.877 [2024-12-05 14:26:53.423757] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fcc0, cid 3, qid 0 00:20:47.877 [2024-12-05 14:26:53.423830] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.877 [2024-12-05 14:26:53.423836] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.877 [2024-12-05 14:26:53.423840] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.877 [2024-12-05 14:26:53.423843] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fcc0) on tqpair=0x1ae3510 00:20:47.877 [2024-12-05 14:26:53.427891] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.877 [2024-12-05 14:26:53.427899] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.877 [2024-12-05 14:26:53.427903] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae3510) 00:20:47.877 [2024-12-05 14:26:53.427912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.877 [2024-12-05 14:26:53.427943] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fcc0, cid 3, qid 0 00:20:47.877 [2024-12-05 14:26:53.428057] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.877 [2024-12-05 14:26:53.428065] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.877 [2024-12-05 14:26:53.428068] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.877 [2024-12-05 14:26:53.428072] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fcc0) on tqpair=0x1ae3510 00:20:47.877 [2024-12-05 14:26:53.428078] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:47.877 [2024-12-05 14:26:53.428083] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:47.877 [2024-12-05 14:26:53.428093] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.877 [2024-12-05 14:26:53.428098] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.877 [2024-12-05 14:26:53.428102] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae3510) 00:20:47.877 [2024-12-05 14:26:53.428109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.877 [2024-12-05 14:26:53.428130] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fcc0, cid 3, qid 0 00:20:47.877 [2024-12-05 14:26:53.428183] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.877 [2024-12-05 14:26:53.428189] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.877 [2024-12-05 14:26:53.428193] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.877 [2024-12-05 14:26:53.428197] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fcc0) on tqpair=0x1ae3510 00:20:47.877 [2024-12-05 14:26:53.428208] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.877 [2024-12-05 14:26:53.428213] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.877 [2024-12-05 14:26:53.428216] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae3510) 00:20:47.877 [2024-12-05 14:26:53.428223] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.877 [2024-12-05 14:26:53.428242] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fcc0, cid 3, qid 0 00:20:47.877 [2024-12-05 14:26:53.428295] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.877 [2024-12-05 14:26:53.428301] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.877 [2024-12-05 14:26:53.428320] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.877 [2024-12-05 14:26:53.428324] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fcc0) on tqpair=0x1ae3510 00:20:47.877 [2024-12-05 14:26:53.428334] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.877 [2024-12-05 14:26:53.428338] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.877 [2024-12-05 14:26:53.428342] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae3510) 00:20:47.877 [2024-12-05 14:26:53.428349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.877 [2024-12-05 14:26:53.428366] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fcc0, cid 3, qid 0 00:20:47.877 [2024-12-05 14:26:53.428433] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.877 [2024-12-05 14:26:53.428439] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.877 [2024-12-05 14:26:53.428442] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.877 [2024-12-05 14:26:53.428446] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fcc0) on tqpair=0x1ae3510 00:20:47.877 [2024-12-05 14:26:53.428456] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.877 [2024-12-05 14:26:53.428460] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.877 [2024-12-05 14:26:53.428464] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae3510) 00:20:47.877 [2024-12-05 14:26:53.428471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.877 [2024-12-05 14:26:53.428489] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fcc0, cid 3, qid 0 00:20:47.877 [2024-12-05 14:26:53.428536] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.877 [2024-12-05 14:26:53.428542] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.877 [2024-12-05 14:26:53.428545] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.877 [2024-12-05 14:26:53.428549] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fcc0) on tqpair=0x1ae3510 00:20:47.877 [2024-12-05 14:26:53.428559] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.877 [2024-12-05 14:26:53.428564] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.877 [2024-12-05 14:26:53.428567] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae3510) 00:20:47.877 [2024-12-05 14:26:53.428574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.877 [2024-12-05 14:26:53.428592] nvme_tcp.c: 
872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fcc0, cid 3, qid 0 00:20:47.878 [2024-12-05 14:26:53.428641] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.878 [2024-12-05 14:26:53.428647] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.878 [2024-12-05 14:26:53.428651] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.878 [2024-12-05 14:26:53.428654] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fcc0) on tqpair=0x1ae3510 00:20:47.878 [2024-12-05 14:26:53.428665] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.878 [2024-12-05 14:26:53.428669] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.878 [2024-12-05 14:26:53.428673] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae3510) 00:20:47.878 [2024-12-05 14:26:53.428680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.878 [2024-12-05 14:26:53.428698] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fcc0, cid 3, qid 0 00:20:47.878 [2024-12-05 14:26:53.428762] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.878 [2024-12-05 14:26:53.428769] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.878 [2024-12-05 14:26:53.428772] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.878 [2024-12-05 14:26:53.428776] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fcc0) on tqpair=0x1ae3510 00:20:47.878 [2024-12-05 14:26:53.428786] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.878 [2024-12-05 14:26:53.428791] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.878 [2024-12-05 14:26:53.428794] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae3510) 00:20:47.878 [2024-12-05 14:26:53.428801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.878 [2024-12-05 14:26:53.428819] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fcc0, cid 3, qid 0 00:20:47.878 [2024-12-05 14:26:53.428885] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.878 [2024-12-05 14:26:53.428893] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.878 [2024-12-05 14:26:53.428896] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.878 [2024-12-05 14:26:53.428900] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fcc0) on tqpair=0x1ae3510 00:20:47.878 [2024-12-05 14:26:53.428911] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.878 [2024-12-05 14:26:53.428915] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.878 [2024-12-05 14:26:53.428919] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae3510) 00:20:47.878 [2024-12-05 14:26:53.428926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.878 [2024-12-05 14:26:53.428946] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fcc0, cid 3, qid 0 00:20:47.878 [2024-12-05 14:26:53.428996] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:20:47.878 [2024-12-05 14:26:53.429002] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.878 [2024-12-05 14:26:53.429006] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.878 [2024-12-05 14:26:53.429010] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fcc0) on tqpair=0x1ae3510 00:20:47.878 [2024-12-05 14:26:53.429020] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.878 [2024-12-05 14:26:53.429024] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.878 [2024-12-05 14:26:53.429028] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae3510) 00:20:47.878 [2024-12-05 14:26:53.429035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.878 [2024-12-05 14:26:53.429054] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fcc0, cid 3, qid 0 00:20:47.878 [2024-12-05 14:26:53.429102] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.878 [2024-12-05 14:26:53.429109] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.878 [2024-12-05 14:26:53.429112] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.878 [2024-12-05 14:26:53.429116] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fcc0) on tqpair=0x1ae3510 00:20:47.878 [2024-12-05 14:26:53.429126] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.878 [2024-12-05 14:26:53.429130] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.878 [2024-12-05 14:26:53.429134] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae3510) 00:20:47.878 [2024-12-05 14:26:53.429141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.878 [2024-12-05 14:26:53.429159] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fcc0, cid 3, qid 0 00:20:47.878 [2024-12-05 14:26:53.429207] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.878 [2024-12-05 14:26:53.429213] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.878 [2024-12-05 14:26:53.429216] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.878 [2024-12-05 14:26:53.429220] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fcc0) on tqpair=0x1ae3510 00:20:47.878 [2024-12-05 14:26:53.429231] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.878 [2024-12-05 14:26:53.429235] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.878 [2024-12-05 14:26:53.429238] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae3510) 00:20:47.878 [2024-12-05 14:26:53.429245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.878 [2024-12-05 14:26:53.429263] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fcc0, cid 3, qid 0 00:20:47.878 [2024-12-05 14:26:53.429312] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.878 [2024-12-05 14:26:53.429318] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.878 [2024-12-05 14:26:53.429322] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.878 [2024-12-05 14:26:53.429325] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fcc0) on tqpair=0x1ae3510 00:20:47.878 [2024-12-05 14:26:53.429336] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.878 [2024-12-05 14:26:53.429340] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.878 [2024-12-05 14:26:53.429344] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae3510) 00:20:47.878 [2024-12-05 14:26:53.429351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.878 [2024-12-05 14:26:53.429369] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fcc0, cid 3, qid 0 00:20:47.878 [2024-12-05 14:26:53.429419] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.878 [2024-12-05 14:26:53.429425] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.878 [2024-12-05 14:26:53.429428] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.878 [2024-12-05 14:26:53.429432] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fcc0) on tqpair=0x1ae3510 00:20:47.878 [2024-12-05 14:26:53.429442] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.878 [2024-12-05 14:26:53.429447] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.878 [2024-12-05 14:26:53.429451] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae3510) 00:20:47.878 [2024-12-05 14:26:53.429458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.878 [2024-12-05 14:26:53.429475] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fcc0, cid 3, qid 0 00:20:47.878 [2024-12-05 14:26:53.429524] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.878 [2024-12-05 14:26:53.429530] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.878 [2024-12-05 14:26:53.429534] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.878 [2024-12-05 14:26:53.429538] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fcc0) on tqpair=0x1ae3510 00:20:47.878 [2024-12-05 14:26:53.429548] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.878 [2024-12-05 14:26:53.429552] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.878 [2024-12-05 14:26:53.429556] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae3510) 00:20:47.878 [2024-12-05 14:26:53.429562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.878 [2024-12-05 14:26:53.429580] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fcc0, cid 3, qid 0 00:20:47.878 [2024-12-05 14:26:53.429632] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.879 [2024-12-05 14:26:53.429639] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.879 [2024-12-05 14:26:53.429643] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.879 [2024-12-05 14:26:53.429646] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fcc0) on 
tqpair=0x1ae3510 00:20:47.879 [2024-12-05 14:26:53.429657] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.879 [2024-12-05 14:26:53.429661] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.879 [2024-12-05 14:26:53.429664] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae3510) 00:20:47.879 [2024-12-05 14:26:53.429671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.879 [2024-12-05 14:26:53.429689] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fcc0, cid 3, qid 0 00:20:47.879 [2024-12-05 14:26:53.429740] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.879 [2024-12-05 14:26:53.429746] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.879 [2024-12-05 14:26:53.429750] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.879 [2024-12-05 14:26:53.429753] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fcc0) on tqpair=0x1ae3510 00:20:47.879 [2024-12-05 14:26:53.429764] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.879 [2024-12-05 14:26:53.429768] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.879 [2024-12-05 14:26:53.429771] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae3510) 00:20:47.879 [2024-12-05 14:26:53.429778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.879 [2024-12-05 14:26:53.429796] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fcc0, cid 3, qid 0 00:20:47.879 [2024-12-05 14:26:53.429861] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.879 [2024-12-05 14:26:53.429869] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.879 [2024-12-05 14:26:53.429872] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.879 [2024-12-05 14:26:53.429876] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fcc0) on tqpair=0x1ae3510 00:20:47.879 [2024-12-05 14:26:53.429887] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.879 [2024-12-05 14:26:53.429891] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.879 [2024-12-05 14:26:53.429895] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae3510) 00:20:47.879 [2024-12-05 14:26:53.429902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.879 [2024-12-05 14:26:53.429922] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fcc0, cid 3, qid 0 00:20:47.879 [2024-12-05 14:26:53.429973] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.879 [2024-12-05 14:26:53.429979] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.879 [2024-12-05 14:26:53.429983] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.879 [2024-12-05 14:26:53.429986] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fcc0) on tqpair=0x1ae3510 00:20:47.879 [2024-12-05 14:26:53.429997] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.879 [2024-12-05 14:26:53.430001] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.879 [2024-12-05 14:26:53.430005] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae3510) 00:20:47.879 [2024-12-05 14:26:53.430012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.879 [2024-12-05 14:26:53.430030] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fcc0, cid 3, qid 0 00:20:47.879 [2024-12-05 14:26:53.430079] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.879 [2024-12-05 14:26:53.430086] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.879 [2024-12-05 14:26:53.430089] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.879 [2024-12-05 14:26:53.430093] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fcc0) on tqpair=0x1ae3510 00:20:47.879 [2024-12-05 14:26:53.430103] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.879 [2024-12-05 14:26:53.430108] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.879 [2024-12-05 14:26:53.430112] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae3510) 00:20:47.879 [2024-12-05 14:26:53.430118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.879 [2024-12-05 14:26:53.430136] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fcc0, cid 3, qid 0 00:20:47.879 [2024-12-05 14:26:53.430191] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.879 [2024-12-05 14:26:53.430197] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.879 [2024-12-05 14:26:53.430201] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.879 [2024-12-05 14:26:53.430204] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fcc0) on tqpair=0x1ae3510 00:20:47.879 [2024-12-05 14:26:53.430215] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.879 [2024-12-05 14:26:53.430219] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.879 [2024-12-05 14:26:53.430223] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae3510) 00:20:47.879 [2024-12-05 14:26:53.430230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.879 [2024-12-05 14:26:53.430248] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fcc0, cid 3, qid 0 00:20:47.879 [2024-12-05 14:26:53.430294] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.879 [2024-12-05 14:26:53.430300] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.879 [2024-12-05 14:26:53.430303] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.879 [2024-12-05 14:26:53.430307] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fcc0) on tqpair=0x1ae3510 00:20:47.879 [2024-12-05 14:26:53.430318] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.879 [2024-12-05 14:26:53.430322] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.879 [2024-12-05 14:26:53.430326] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae3510) 
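The block of DEBUG lines repeating here is the host-side driver polling the controller status property after it started the shutdown sequence ("Prepare to destruct SSD" above, "shutdown complete in 7 milliseconds" below): each iteration is one fabrics PROPERTY GET capsule on the admin queue. With debug logging compiled in, these traces interleave with the identify report. The report can be reproduced by pointing the identify example at the listener - a sketch, with the binary path assumed from a typical SPDK build tree:

  ./build/examples/identify \
      -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'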
00:20:47.879 [2024-12-05 14:26:53.430332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.879 [2024-12-05 14:26:53.430351] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fcc0, cid 3, qid 0 00:20:47.879 [2024-12-05 14:26:53.430398] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.879 [2024-12-05 14:26:53.430405] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.879 [2024-12-05 14:26:53.430408] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.879 [2024-12-05 14:26:53.430412] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fcc0) on tqpair=0x1ae3510 00:20:47.879 [2024-12-05 14:26:53.430422] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.879 [2024-12-05 14:26:53.430426] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.879 [2024-12-05 14:26:53.430430] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae3510) 00:20:47.879 [2024-12-05 14:26:53.430437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.879 [2024-12-05 14:26:53.430455] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fcc0, cid 3, qid 0 00:20:47.879 [2024-12-05 14:26:53.430501] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.879 [2024-12-05 14:26:53.430507] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.879 [2024-12-05 14:26:53.430511] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.879 [2024-12-05 14:26:53.430514] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fcc0) on tqpair=0x1ae3510 00:20:47.879 [2024-12-05 14:26:53.430525] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.879 [2024-12-05 14:26:53.430529] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.879 [2024-12-05 14:26:53.430533] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae3510) 00:20:47.879 [2024-12-05 14:26:53.430539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.879 [2024-12-05 14:26:53.430557] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fcc0, cid 3, qid 0 00:20:47.879 [2024-12-05 14:26:53.430604] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.879 [2024-12-05 14:26:53.430610] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.879 [2024-12-05 14:26:53.430613] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.879 [2024-12-05 14:26:53.430617] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fcc0) on tqpair=0x1ae3510 00:20:47.880 [2024-12-05 14:26:53.430627] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.880 [2024-12-05 14:26:53.430632] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.880 [2024-12-05 14:26:53.430635] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae3510) 00:20:47.880 [2024-12-05 14:26:53.430642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.880 
[2024-12-05 14:26:53.430660] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fcc0, cid 3, qid 0 00:20:47.880 [2024-12-05 14:26:53.430708] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.880 [2024-12-05 14:26:53.430714] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.880 [2024-12-05 14:26:53.430718] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.880 [2024-12-05 14:26:53.430722] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fcc0) on tqpair=0x1ae3510 00:20:47.880 [2024-12-05 14:26:53.430732] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.880 [2024-12-05 14:26:53.430736] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.880 [2024-12-05 14:26:53.430740] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae3510) 00:20:47.880 [2024-12-05 14:26:53.430747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.880 [2024-12-05 14:26:53.430765] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fcc0, cid 3, qid 0 00:20:47.880 [2024-12-05 14:26:53.430840] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.880 [2024-12-05 14:26:53.430848] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.880 [2024-12-05 14:26:53.430852] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.880 [2024-12-05 14:26:53.430855] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fcc0) on tqpair=0x1ae3510 00:20:47.880 [2024-12-05 14:26:53.430866] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.880 [2024-12-05 14:26:53.430871] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.880 [2024-12-05 14:26:53.430874] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae3510) 00:20:47.880 [2024-12-05 14:26:53.430882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.880 [2024-12-05 14:26:53.430906] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fcc0, cid 3, qid 0 00:20:47.880 [2024-12-05 14:26:53.430959] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.880 [2024-12-05 14:26:53.430966] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.880 [2024-12-05 14:26:53.430969] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.880 [2024-12-05 14:26:53.430973] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fcc0) on tqpair=0x1ae3510 00:20:47.880 [2024-12-05 14:26:53.430984] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.880 [2024-12-05 14:26:53.430988] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.880 [2024-12-05 14:26:53.430992] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae3510) 00:20:47.880 [2024-12-05 14:26:53.430999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.880 [2024-12-05 14:26:53.431017] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fcc0, cid 3, qid 0 00:20:47.880 [2024-12-05 14:26:53.431071] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.880 [2024-12-05 14:26:53.431078] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.880 [2024-12-05 14:26:53.431082] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.880 [2024-12-05 14:26:53.431085] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fcc0) on tqpair=0x1ae3510 00:20:47.880 [2024-12-05 14:26:53.431096] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.880 [2024-12-05 14:26:53.431101] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.880 [2024-12-05 14:26:53.431104] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae3510) 00:20:47.880 [2024-12-05 14:26:53.431111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.880 [2024-12-05 14:26:53.431130] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fcc0, cid 3, qid 0 00:20:47.880 [2024-12-05 14:26:53.431182] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.880 [2024-12-05 14:26:53.431188] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.880 [2024-12-05 14:26:53.431192] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.880 [2024-12-05 14:26:53.431196] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fcc0) on tqpair=0x1ae3510 00:20:47.880 [2024-12-05 14:26:53.431221] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.880 [2024-12-05 14:26:53.431225] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.880 [2024-12-05 14:26:53.431229] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae3510) 00:20:47.880 [2024-12-05 14:26:53.431236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.880 [2024-12-05 14:26:53.431254] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fcc0, cid 3, qid 0 00:20:47.880 [2024-12-05 14:26:53.431305] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.880 [2024-12-05 14:26:53.431312] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.880 [2024-12-05 14:26:53.431315] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.880 [2024-12-05 14:26:53.431319] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fcc0) on tqpair=0x1ae3510 00:20:47.880 [2024-12-05 14:26:53.431329] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.880 [2024-12-05 14:26:53.431334] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.880 [2024-12-05 14:26:53.431337] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae3510) 00:20:47.880 [2024-12-05 14:26:53.431344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.880 [2024-12-05 14:26:53.431362] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fcc0, cid 3, qid 0 00:20:47.880 [2024-12-05 14:26:53.431419] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.880 [2024-12-05 14:26:53.431426] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.880 
[2024-12-05 14:26:53.431429] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.880 [2024-12-05 14:26:53.431433] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fcc0) on tqpair=0x1ae3510 00:20:47.880 [2024-12-05 14:26:53.431443] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.880 [2024-12-05 14:26:53.431448] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.880 [2024-12-05 14:26:53.431451] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae3510) 00:20:47.880 [2024-12-05 14:26:53.431458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.880 [2024-12-05 14:26:53.431476] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fcc0, cid 3, qid 0 00:20:47.880 [2024-12-05 14:26:53.431537] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.880 [2024-12-05 14:26:53.431543] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.880 [2024-12-05 14:26:53.431546] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.880 [2024-12-05 14:26:53.431550] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fcc0) on tqpair=0x1ae3510 00:20:47.880 [2024-12-05 14:26:53.431561] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.880 [2024-12-05 14:26:53.431565] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.880 [2024-12-05 14:26:53.431568] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae3510) 00:20:47.880 [2024-12-05 14:26:53.431575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.880 [2024-12-05 14:26:53.431594] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fcc0, cid 3, qid 0 00:20:47.880 [2024-12-05 14:26:53.431639] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.880 [2024-12-05 14:26:53.431645] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.880 [2024-12-05 14:26:53.431649] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.880 [2024-12-05 14:26:53.431653] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fcc0) on tqpair=0x1ae3510 00:20:47.880 [2024-12-05 14:26:53.431663] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.880 [2024-12-05 14:26:53.431668] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.881 [2024-12-05 14:26:53.431671] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae3510) 00:20:47.881 [2024-12-05 14:26:53.431678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.881 [2024-12-05 14:26:53.431696] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fcc0, cid 3, qid 0 00:20:47.881 [2024-12-05 14:26:53.431745] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.881 [2024-12-05 14:26:53.431751] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.881 [2024-12-05 14:26:53.431755] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.881 [2024-12-05 14:26:53.431759] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0x1b2fcc0) on tqpair=0x1ae3510 00:20:47.881 [2024-12-05 14:26:53.431769] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.881 [2024-12-05 14:26:53.431773] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.881 [2024-12-05 14:26:53.431777] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae3510) 00:20:47.881 [2024-12-05 14:26:53.431783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.881 [2024-12-05 14:26:53.431801] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fcc0, cid 3, qid 0 00:20:47.881 [2024-12-05 14:26:53.435822] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.881 [2024-12-05 14:26:53.435842] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.881 [2024-12-05 14:26:53.435847] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.881 [2024-12-05 14:26:53.435851] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fcc0) on tqpair=0x1ae3510 00:20:47.881 [2024-12-05 14:26:53.435865] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.881 [2024-12-05 14:26:53.435870] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.881 [2024-12-05 14:26:53.435874] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ae3510) 00:20:47.881 [2024-12-05 14:26:53.435881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.881 [2024-12-05 14:26:53.435907] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b2fcc0, cid 3, qid 0 00:20:47.881 [2024-12-05 14:26:53.435979] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.881 [2024-12-05 14:26:53.436010] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.881 [2024-12-05 14:26:53.436022] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.881 [2024-12-05 14:26:53.436026] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b2fcc0) on tqpair=0x1ae3510 00:20:47.881 [2024-12-05 14:26:53.436035] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:20:47.881 0 Kelvin (-273 Celsius) 00:20:47.881 Available Spare: 0% 00:20:47.881 Available Spare Threshold: 0% 00:20:47.881 Life Percentage Used: 0% 00:20:47.881 Data Units Read: 0 00:20:47.881 Data Units Written: 0 00:20:47.881 Host Read Commands: 0 00:20:47.881 Host Write Commands: 0 00:20:47.881 Controller Busy Time: 0 minutes 00:20:47.881 Power Cycles: 0 00:20:47.881 Power On Hours: 0 hours 00:20:47.881 Unsafe Shutdowns: 0 00:20:47.881 Unrecoverable Media Errors: 0 00:20:47.881 Lifetime Error Log Entries: 0 00:20:47.881 Warning Temperature Time: 0 minutes 00:20:47.881 Critical Temperature Time: 0 minutes 00:20:47.881 00:20:47.881 Number of Queues 00:20:47.881 ================ 00:20:47.881 Number of I/O Submission Queues: 127 00:20:47.881 Number of I/O Completion Queues: 127 00:20:47.881 00:20:47.881 Active Namespaces 00:20:47.881 ================= 00:20:47.881 Namespace ID:1 00:20:47.881 Error Recovery Timeout: Unlimited 00:20:47.881 Command Set Identifier: NVM (00h) 00:20:47.881 Deallocate: Supported 00:20:47.881 Deallocated/Unwritten Error: Not Supported 00:20:47.881 
Deallocated Read Value: Unknown 00:20:47.881 Deallocate in Write Zeroes: Not Supported 00:20:47.881 Deallocated Guard Field: 0xFFFF 00:20:47.881 Flush: Supported 00:20:47.881 Reservation: Supported 00:20:47.881 Namespace Sharing Capabilities: Multiple Controllers 00:20:47.881 Size (in LBAs): 131072 (0GiB) 00:20:47.881 Capacity (in LBAs): 131072 (0GiB) 00:20:47.881 Utilization (in LBAs): 131072 (0GiB) 00:20:47.881 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:47.881 EUI64: ABCDEF0123456789 00:20:47.881 UUID: c9d1cf8d-489c-4b48-ac23-b5c5d681bb24 00:20:47.881 Thin Provisioning: Not Supported 00:20:47.881 Per-NS Atomic Units: Yes 00:20:47.881 Atomic Boundary Size (Normal): 0 00:20:47.881 Atomic Boundary Size (PFail): 0 00:20:47.881 Atomic Boundary Offset: 0 00:20:47.881 Maximum Single Source Range Length: 65535 00:20:47.881 Maximum Copy Length: 65535 00:20:47.881 Maximum Source Range Count: 1 00:20:47.881 NGUID/EUI64 Never Reused: No 00:20:47.881 Namespace Write Protected: No 00:20:47.881 Number of LBA Formats: 1 00:20:47.881 Current LBA Format: LBA Format #00 00:20:47.881 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:47.881 00:20:47.881 14:26:53 -- host/identify.sh@51 -- # sync 00:20:48.141 14:26:53 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:48.141 14:26:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.141 14:26:53 -- common/autotest_common.sh@10 -- # set +x 00:20:48.141 14:26:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.141 14:26:53 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:48.141 14:26:53 -- host/identify.sh@56 -- # nvmftestfini 00:20:48.141 14:26:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:48.141 14:26:53 -- nvmf/common.sh@116 -- # sync 00:20:48.141 14:26:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:48.141 14:26:53 -- nvmf/common.sh@119 -- # set +e 00:20:48.141 14:26:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:48.141 14:26:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:48.141 rmmod nvme_tcp 00:20:48.141 rmmod nvme_fabrics 00:20:48.141 rmmod nvme_keyring 00:20:48.141 14:26:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:48.141 14:26:53 -- nvmf/common.sh@123 -- # set -e 00:20:48.141 14:26:53 -- nvmf/common.sh@124 -- # return 0 00:20:48.141 14:26:53 -- nvmf/common.sh@477 -- # '[' -n 93591 ']' 00:20:48.141 14:26:53 -- nvmf/common.sh@478 -- # killprocess 93591 00:20:48.141 14:26:53 -- common/autotest_common.sh@936 -- # '[' -z 93591 ']' 00:20:48.141 14:26:53 -- common/autotest_common.sh@940 -- # kill -0 93591 00:20:48.141 14:26:53 -- common/autotest_common.sh@941 -- # uname 00:20:48.141 14:26:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:48.141 14:26:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93591 00:20:48.141 14:26:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:48.141 14:26:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:48.141 killing process with pid 93591 00:20:48.141 14:26:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93591' 00:20:48.141 14:26:53 -- common/autotest_common.sh@955 -- # kill 93591 00:20:48.141 [2024-12-05 14:26:53.617977] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:48.141 14:26:53 -- common/autotest_common.sh@960 -- # wait 93591 00:20:48.400 14:26:53 -- 
nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:48.400 14:26:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:48.400 14:26:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:48.400 14:26:53 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:48.400 14:26:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:48.400 14:26:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:48.400 14:26:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:48.400 14:26:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:48.400 14:26:53 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:48.400 00:20:48.400 real 0m2.722s 00:20:48.400 user 0m7.827s 00:20:48.400 sys 0m0.699s 00:20:48.400 14:26:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:48.400 14:26:53 -- common/autotest_common.sh@10 -- # set +x 00:20:48.400 ************************************ 00:20:48.400 END TEST nvmf_identify 00:20:48.400 ************************************ 00:20:48.400 14:26:53 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:48.400 14:26:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:48.400 14:26:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:48.400 14:26:53 -- common/autotest_common.sh@10 -- # set +x 00:20:48.400 ************************************ 00:20:48.400 START TEST nvmf_perf 00:20:48.400 ************************************ 00:20:48.400 14:26:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:48.400 * Looking for test storage... 00:20:48.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:48.400 14:26:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:48.400 14:26:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:48.400 14:26:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:48.660 14:26:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:48.660 14:26:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:48.660 14:26:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:48.660 14:26:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:48.660 14:26:54 -- scripts/common.sh@335 -- # IFS=.-: 00:20:48.660 14:26:54 -- scripts/common.sh@335 -- # read -ra ver1 00:20:48.660 14:26:54 -- scripts/common.sh@336 -- # IFS=.-: 00:20:48.660 14:26:54 -- scripts/common.sh@336 -- # read -ra ver2 00:20:48.660 14:26:54 -- scripts/common.sh@337 -- # local 'op=<' 00:20:48.660 14:26:54 -- scripts/common.sh@339 -- # ver1_l=2 00:20:48.660 14:26:54 -- scripts/common.sh@340 -- # ver2_l=1 00:20:48.660 14:26:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:48.660 14:26:54 -- scripts/common.sh@343 -- # case "$op" in 00:20:48.660 14:26:54 -- scripts/common.sh@344 -- # : 1 00:20:48.660 14:26:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:48.660 14:26:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:48.660 14:26:54 -- scripts/common.sh@364 -- # decimal 1 00:20:48.660 14:26:54 -- scripts/common.sh@352 -- # local d=1 00:20:48.660 14:26:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:48.660 14:26:54 -- scripts/common.sh@354 -- # echo 1 00:20:48.660 14:26:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:48.660 14:26:54 -- scripts/common.sh@365 -- # decimal 2 00:20:48.660 14:26:54 -- scripts/common.sh@352 -- # local d=2 00:20:48.660 14:26:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:48.660 14:26:54 -- scripts/common.sh@354 -- # echo 2 00:20:48.660 14:26:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:48.660 14:26:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:48.660 14:26:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:48.660 14:26:54 -- scripts/common.sh@367 -- # return 0 00:20:48.660 14:26:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:48.660 14:26:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:48.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.660 --rc genhtml_branch_coverage=1 00:20:48.660 --rc genhtml_function_coverage=1 00:20:48.660 --rc genhtml_legend=1 00:20:48.660 --rc geninfo_all_blocks=1 00:20:48.660 --rc geninfo_unexecuted_blocks=1 00:20:48.660 00:20:48.660 ' 00:20:48.660 14:26:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:48.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.660 --rc genhtml_branch_coverage=1 00:20:48.660 --rc genhtml_function_coverage=1 00:20:48.660 --rc genhtml_legend=1 00:20:48.660 --rc geninfo_all_blocks=1 00:20:48.660 --rc geninfo_unexecuted_blocks=1 00:20:48.660 00:20:48.660 ' 00:20:48.660 14:26:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:48.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.660 --rc genhtml_branch_coverage=1 00:20:48.660 --rc genhtml_function_coverage=1 00:20:48.660 --rc genhtml_legend=1 00:20:48.660 --rc geninfo_all_blocks=1 00:20:48.660 --rc geninfo_unexecuted_blocks=1 00:20:48.660 00:20:48.660 ' 00:20:48.660 14:26:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:48.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.660 --rc genhtml_branch_coverage=1 00:20:48.660 --rc genhtml_function_coverage=1 00:20:48.660 --rc genhtml_legend=1 00:20:48.660 --rc geninfo_all_blocks=1 00:20:48.660 --rc geninfo_unexecuted_blocks=1 00:20:48.660 00:20:48.660 ' 00:20:48.660 14:26:54 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:48.660 14:26:54 -- nvmf/common.sh@7 -- # uname -s 00:20:48.660 14:26:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:48.660 14:26:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:48.660 14:26:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:48.660 14:26:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:48.660 14:26:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:48.660 14:26:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:48.660 14:26:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:48.660 14:26:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:48.660 14:26:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:48.660 14:26:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:48.660 14:26:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:20:48.660 
14:26:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:20:48.660 14:26:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:48.660 14:26:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:48.660 14:26:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:48.660 14:26:54 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:48.660 14:26:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:48.660 14:26:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:48.660 14:26:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:48.660 14:26:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.660 14:26:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.660 14:26:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.660 14:26:54 -- paths/export.sh@5 -- # export PATH 00:20:48.660 14:26:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.660 14:26:54 -- nvmf/common.sh@46 -- # : 0 00:20:48.660 14:26:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:48.660 14:26:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:48.660 14:26:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:48.660 14:26:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:48.660 14:26:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:48.660 14:26:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
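nvmftestinit/nvmf_veth_init, traced below, builds a self-contained test network: the target lives in the nvmf_tgt_ns_spdk namespace with interfaces at 10.0.0.2 and 10.0.0.3, the initiator keeps nvmf_init_if at 10.0.0.1, and all ends are patched into the nvmf_br bridge with an iptables rule admitting TCP port 4420. Condensed to one interface pair it amounts to the following sketch (the second target interface at 10.0.0.3 repeats the same pattern; cleanup of stale devices is omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT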
00:20:48.660 14:26:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:48.660 14:26:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:48.660 14:26:54 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:48.660 14:26:54 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:48.660 14:26:54 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:48.660 14:26:54 -- host/perf.sh@17 -- # nvmftestinit 00:20:48.660 14:26:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:48.660 14:26:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:48.660 14:26:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:48.660 14:26:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:48.660 14:26:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:48.660 14:26:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:48.660 14:26:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:48.660 14:26:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:48.660 14:26:54 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:48.660 14:26:54 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:48.660 14:26:54 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:48.660 14:26:54 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:48.660 14:26:54 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:48.660 14:26:54 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:48.660 14:26:54 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:48.660 14:26:54 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:48.660 14:26:54 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:48.660 14:26:54 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:48.661 14:26:54 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:48.661 14:26:54 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:48.661 14:26:54 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:48.661 14:26:54 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:48.661 14:26:54 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:48.661 14:26:54 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:48.661 14:26:54 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:48.661 14:26:54 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:48.661 14:26:54 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:48.661 14:26:54 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:48.661 Cannot find device "nvmf_tgt_br" 00:20:48.661 14:26:54 -- nvmf/common.sh@154 -- # true 00:20:48.661 14:26:54 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:48.661 Cannot find device "nvmf_tgt_br2" 00:20:48.661 14:26:54 -- nvmf/common.sh@155 -- # true 00:20:48.661 14:26:54 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:48.661 14:26:54 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:48.661 Cannot find device "nvmf_tgt_br" 00:20:48.661 14:26:54 -- nvmf/common.sh@157 -- # true 00:20:48.661 14:26:54 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:48.661 Cannot find device "nvmf_tgt_br2" 00:20:48.661 14:26:54 -- nvmf/common.sh@158 -- # true 00:20:48.661 14:26:54 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:48.661 14:26:54 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:48.661 14:26:54 -- nvmf/common.sh@161 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:48.661 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:48.661 14:26:54 -- nvmf/common.sh@161 -- # true 00:20:48.661 14:26:54 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:48.661 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:48.661 14:26:54 -- nvmf/common.sh@162 -- # true 00:20:48.661 14:26:54 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:48.661 14:26:54 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:48.661 14:26:54 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:48.661 14:26:54 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:48.920 14:26:54 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:48.920 14:26:54 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:48.920 14:26:54 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:48.920 14:26:54 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:48.920 14:26:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:48.920 14:26:54 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:48.920 14:26:54 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:48.920 14:26:54 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:48.920 14:26:54 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:48.920 14:26:54 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:48.920 14:26:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:48.920 14:26:54 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:48.920 14:26:54 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:48.920 14:26:54 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:48.920 14:26:54 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:48.920 14:26:54 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:48.920 14:26:54 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:48.920 14:26:54 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:48.920 14:26:54 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:48.920 14:26:54 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:48.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:48.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:20:48.920 00:20:48.920 --- 10.0.0.2 ping statistics --- 00:20:48.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.920 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:20:48.920 14:26:54 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:48.920 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:48.920 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:20:48.920 00:20:48.920 --- 10.0.0.3 ping statistics --- 00:20:48.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.920 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:20:48.920 14:26:54 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:48.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:48.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:20:48.920 00:20:48.920 --- 10.0.0.1 ping statistics --- 00:20:48.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.920 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:20:48.920 14:26:54 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:48.920 14:26:54 -- nvmf/common.sh@421 -- # return 0 00:20:48.920 14:26:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:48.920 14:26:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:48.920 14:26:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:48.920 14:26:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:48.920 14:26:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:48.920 14:26:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:48.920 14:26:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:48.920 14:26:54 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:48.920 14:26:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:48.920 14:26:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:48.920 14:26:54 -- common/autotest_common.sh@10 -- # set +x 00:20:48.920 14:26:54 -- nvmf/common.sh@469 -- # nvmfpid=93829 00:20:48.920 14:26:54 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:48.920 14:26:54 -- nvmf/common.sh@470 -- # waitforlisten 93829 00:20:48.920 14:26:54 -- common/autotest_common.sh@829 -- # '[' -z 93829 ']' 00:20:48.920 14:26:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.920 14:26:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:48.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:48.920 14:26:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.920 14:26:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:48.920 14:26:54 -- common/autotest_common.sh@10 -- # set +x 00:20:49.179 [2024-12-05 14:26:54.587638] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:49.179 [2024-12-05 14:26:54.587730] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:49.179 [2024-12-05 14:26:54.730235] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:49.179 [2024-12-05 14:26:54.787610] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:49.179 [2024-12-05 14:26:54.787739] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:49.179 [2024-12-05 14:26:54.787751] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:49.179 [2024-12-05 14:26:54.787759] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:49.179 [2024-12-05 14:26:54.787899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:49.179 [2024-12-05 14:26:54.788229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:49.179 [2024-12-05 14:26:54.788643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:49.179 [2024-12-05 14:26:54.788653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.113 14:26:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:50.113 14:26:55 -- common/autotest_common.sh@862 -- # return 0 00:20:50.113 14:26:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:50.113 14:26:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:50.113 14:26:55 -- common/autotest_common.sh@10 -- # set +x 00:20:50.113 14:26:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:50.113 14:26:55 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:50.113 14:26:55 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:20:50.370 14:26:56 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:50.370 14:26:56 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:20:50.937 14:26:56 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:20:50.937 14:26:56 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:50.937 14:26:56 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:50.937 14:26:56 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:20:50.937 14:26:56 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:50.937 14:26:56 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:50.937 14:26:56 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:51.196 [2024-12-05 14:26:56.784758] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:51.196 14:26:56 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:51.455 14:26:57 -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:51.455 14:26:57 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:51.713 14:26:57 -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:51.713 14:26:57 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:51.972 14:26:57 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:52.230 [2024-12-05 14:26:57.654481] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:52.230 14:26:57 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:52.488 14:26:57 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:20:52.488 14:26:57 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:20:52.488 14:26:57 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:52.488 14:26:57 -- host/perf.sh@24 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:20:53.424 Initializing NVMe Controllers 00:20:53.424 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:20:53.424 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:20:53.424 Initialization complete. Launching workers. 00:20:53.424 ======================================================== 00:20:53.424 Latency(us) 00:20:53.424 Device Information : IOPS MiB/s Average min max 00:20:53.424 PCIE (0000:00:06.0) NSID 1 from core 0: 21053.86 82.24 1520.37 278.83 8492.16 00:20:53.424 ======================================================== 00:20:53.424 Total : 21053.86 82.24 1520.37 278.83 8492.16 00:20:53.424 00:20:53.424 14:26:58 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:54.802 Initializing NVMe Controllers 00:20:54.802 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:54.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:54.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:54.802 Initialization complete. Launching workers. 00:20:54.802 ======================================================== 00:20:54.802 Latency(us) 00:20:54.802 Device Information : IOPS MiB/s Average min max 00:20:54.802 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3345.99 13.07 299.76 106.59 4324.52 00:20:54.802 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 125.00 0.49 8031.88 6035.84 12004.40 00:20:54.802 ======================================================== 00:20:54.802 Total : 3470.99 13.56 578.21 106.59 12004.40 00:20:54.802 00:20:54.802 14:27:00 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:56.179 Initializing NVMe Controllers 00:20:56.179 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:56.179 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:56.179 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:56.179 Initialization complete. Launching workers. 00:20:56.179 ======================================================== 00:20:56.179 Latency(us) 00:20:56.179 Device Information : IOPS MiB/s Average min max 00:20:56.179 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9658.79 37.73 3313.02 566.29 7954.69 00:20:56.179 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2670.67 10.43 12045.33 7068.20 23376.45 00:20:56.179 ======================================================== 00:20:56.179 Total : 12329.46 48.16 5204.51 566.29 23376.45 00:20:56.179 00:20:56.179 14:27:01 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:20:56.179 14:27:01 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:58.714 Initializing NVMe Controllers 00:20:58.714 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:58.714 Controller IO queue size 128, less than required. 
00:20:58.714 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:58.714 Controller IO queue size 128, less than required. 00:20:58.714 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:58.714 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:58.714 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:58.714 Initialization complete. Launching workers. 00:20:58.714 ======================================================== 00:20:58.714 Latency(us) 00:20:58.714 Device Information : IOPS MiB/s Average min max 00:20:58.714 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1612.48 403.12 80127.79 50514.06 135615.11 00:20:58.714 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 623.11 155.78 213307.19 74864.70 322156.15 00:20:58.714 ======================================================== 00:20:58.714 Total : 2235.58 558.90 117247.75 50514.06 322156.15 00:20:58.714 00:20:58.714 14:27:04 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:58.973 No valid NVMe controllers or AIO or URING devices found 00:20:58.973 Initializing NVMe Controllers 00:20:58.973 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:58.973 Controller IO queue size 128, less than required. 00:20:58.973 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:58.973 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:58.973 Controller IO queue size 128, less than required. 00:20:58.973 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:58.973 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:20:58.973 WARNING: Some requested NVMe devices were skipped 00:20:58.973 14:27:04 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:21:01.505 Initializing NVMe Controllers 00:21:01.505 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:01.505 Controller IO queue size 128, less than required. 00:21:01.505 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:01.505 Controller IO queue size 128, less than required. 00:21:01.505 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:01.505 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:01.505 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:01.505 Initialization complete. Launching workers. 
00:21:01.505 00:21:01.505 ==================== 00:21:01.505 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:01.505 TCP transport: 00:21:01.505 polls: 8367 00:21:01.505 idle_polls: 5630 00:21:01.505 sock_completions: 2737 00:21:01.505 nvme_completions: 3693 00:21:01.505 submitted_requests: 5806 00:21:01.505 queued_requests: 1 00:21:01.505 00:21:01.505 ==================== 00:21:01.505 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:01.505 TCP transport: 00:21:01.505 polls: 8486 00:21:01.505 idle_polls: 5745 00:21:01.505 sock_completions: 2741 00:21:01.505 nvme_completions: 5433 00:21:01.505 submitted_requests: 8248 00:21:01.505 queued_requests: 1 00:21:01.505 ======================================================== 00:21:01.505 Latency(us) 00:21:01.505 Device Information : IOPS MiB/s Average min max 00:21:01.505 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 986.96 246.74 132544.01 81030.21 209035.86 00:21:01.505 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1421.45 355.36 91000.92 54317.36 121506.02 00:21:01.505 ======================================================== 00:21:01.505 Total : 2408.41 602.10 108025.22 54317.36 209035.86 00:21:01.505 00:21:01.505 14:27:07 -- host/perf.sh@66 -- # sync 00:21:01.505 14:27:07 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:01.763 14:27:07 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:21:01.763 14:27:07 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:21:01.763 14:27:07 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:21:02.020 14:27:07 -- host/perf.sh@72 -- # ls_guid=149fe5bd-3040-4a82-8164-c2bfeba8d613 00:21:02.020 14:27:07 -- host/perf.sh@73 -- # get_lvs_free_mb 149fe5bd-3040-4a82-8164-c2bfeba8d613 00:21:02.020 14:27:07 -- common/autotest_common.sh@1353 -- # local lvs_uuid=149fe5bd-3040-4a82-8164-c2bfeba8d613 00:21:02.020 14:27:07 -- common/autotest_common.sh@1354 -- # local lvs_info 00:21:02.020 14:27:07 -- common/autotest_common.sh@1355 -- # local fc 00:21:02.020 14:27:07 -- common/autotest_common.sh@1356 -- # local cs 00:21:02.020 14:27:07 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:02.278 14:27:07 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:21:02.278 { 00:21:02.278 "base_bdev": "Nvme0n1", 00:21:02.278 "block_size": 4096, 00:21:02.278 "cluster_size": 4194304, 00:21:02.278 "free_clusters": 1278, 00:21:02.278 "name": "lvs_0", 00:21:02.278 "total_data_clusters": 1278, 00:21:02.278 "uuid": "149fe5bd-3040-4a82-8164-c2bfeba8d613" 00:21:02.278 } 00:21:02.278 ]' 00:21:02.278 14:27:07 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="149fe5bd-3040-4a82-8164-c2bfeba8d613") .free_clusters' 00:21:02.536 14:27:07 -- common/autotest_common.sh@1358 -- # fc=1278 00:21:02.536 14:27:07 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="149fe5bd-3040-4a82-8164-c2bfeba8d613") .cluster_size' 00:21:02.536 5112 00:21:02.537 14:27:07 -- common/autotest_common.sh@1359 -- # cs=4194304 00:21:02.537 14:27:07 -- common/autotest_common.sh@1362 -- # free_mb=5112 00:21:02.537 14:27:07 -- common/autotest_common.sh@1363 -- # echo 5112 00:21:02.537 14:27:07 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:21:02.537 14:27:07 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create -u 149fe5bd-3040-4a82-8164-c2bfeba8d613 lbd_0 5112 00:21:02.795 14:27:08 -- host/perf.sh@80 -- # lb_guid=f605dce5-e4bf-45fa-935c-a208fd5c1259 00:21:02.795 14:27:08 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore f605dce5-e4bf-45fa-935c-a208fd5c1259 lvs_n_0 00:21:03.054 14:27:08 -- host/perf.sh@83 -- # ls_nested_guid=b3380c23-86fd-4f7f-a1bf-c8d8ed109c51 00:21:03.054 14:27:08 -- host/perf.sh@84 -- # get_lvs_free_mb b3380c23-86fd-4f7f-a1bf-c8d8ed109c51 00:21:03.054 14:27:08 -- common/autotest_common.sh@1353 -- # local lvs_uuid=b3380c23-86fd-4f7f-a1bf-c8d8ed109c51 00:21:03.054 14:27:08 -- common/autotest_common.sh@1354 -- # local lvs_info 00:21:03.054 14:27:08 -- common/autotest_common.sh@1355 -- # local fc 00:21:03.054 14:27:08 -- common/autotest_common.sh@1356 -- # local cs 00:21:03.054 14:27:08 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:03.313 14:27:08 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:21:03.313 { 00:21:03.313 "base_bdev": "Nvme0n1", 00:21:03.313 "block_size": 4096, 00:21:03.313 "cluster_size": 4194304, 00:21:03.313 "free_clusters": 0, 00:21:03.313 "name": "lvs_0", 00:21:03.313 "total_data_clusters": 1278, 00:21:03.313 "uuid": "149fe5bd-3040-4a82-8164-c2bfeba8d613" 00:21:03.313 }, 00:21:03.313 { 00:21:03.313 "base_bdev": "f605dce5-e4bf-45fa-935c-a208fd5c1259", 00:21:03.313 "block_size": 4096, 00:21:03.313 "cluster_size": 4194304, 00:21:03.313 "free_clusters": 1276, 00:21:03.313 "name": "lvs_n_0", 00:21:03.313 "total_data_clusters": 1276, 00:21:03.313 "uuid": "b3380c23-86fd-4f7f-a1bf-c8d8ed109c51" 00:21:03.313 } 00:21:03.313 ]' 00:21:03.313 14:27:08 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="b3380c23-86fd-4f7f-a1bf-c8d8ed109c51") .free_clusters' 00:21:03.313 14:27:08 -- common/autotest_common.sh@1358 -- # fc=1276 00:21:03.313 14:27:08 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="b3380c23-86fd-4f7f-a1bf-c8d8ed109c51") .cluster_size' 00:21:03.571 14:27:08 -- common/autotest_common.sh@1359 -- # cs=4194304 00:21:03.571 5104 00:21:03.571 14:27:08 -- common/autotest_common.sh@1362 -- # free_mb=5104 00:21:03.571 14:27:08 -- common/autotest_common.sh@1363 -- # echo 5104 00:21:03.571 14:27:08 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:21:03.571 14:27:08 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b3380c23-86fd-4f7f-a1bf-c8d8ed109c51 lbd_nest_0 5104 00:21:03.830 14:27:09 -- host/perf.sh@88 -- # lb_nested_guid=afc22667-ccd5-42c6-a3c5-bc7d251a82f5 00:21:03.830 14:27:09 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:03.830 14:27:09 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:21:03.830 14:27:09 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 afc22667-ccd5-42c6-a3c5-bc7d251a82f5 00:21:04.397 14:27:09 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:04.397 14:27:09 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:21:04.397 14:27:09 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:21:04.397 14:27:09 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:04.397 14:27:09 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:04.397 14:27:09 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:04.656 No valid NVMe controllers or AIO or URING devices found 00:21:04.656 Initializing NVMe Controllers 00:21:04.656 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:04.656 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:04.656 WARNING: Some requested NVMe devices were skipped 00:21:04.915 14:27:10 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:04.915 14:27:10 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:14.889 Initializing NVMe Controllers 00:21:14.889 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:14.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:14.889 Initialization complete. Launching workers. 00:21:14.889 ======================================================== 00:21:14.889 Latency(us) 00:21:14.889 Device Information : IOPS MiB/s Average min max 00:21:14.889 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 830.00 103.75 1204.51 386.43 8333.86 00:21:14.889 ======================================================== 00:21:14.889 Total : 830.00 103.75 1204.51 386.43 8333.86 00:21:14.889 00:21:15.160 14:27:20 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:15.160 14:27:20 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:15.160 14:27:20 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:15.433 No valid NVMe controllers or AIO or URING devices found 00:21:15.433 Initializing NVMe Controllers 00:21:15.433 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:15.433 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:15.433 WARNING: Some requested NVMe devices were skipped 00:21:15.433 14:27:20 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:15.433 14:27:20 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:27.646 Initializing NVMe Controllers 00:21:27.647 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:27.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:27.647 Initialization complete. Launching workers. 
00:21:27.647 ======================================================== 00:21:27.647 Latency(us) 00:21:27.647 Device Information : IOPS MiB/s Average min max 00:21:27.647 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 966.37 120.80 33145.64 7976.53 455815.51 00:21:27.647 ======================================================== 00:21:27.647 Total : 966.37 120.80 33145.64 7976.53 455815.51 00:21:27.647 00:21:27.647 14:27:31 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:27.647 14:27:31 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:27.647 14:27:31 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:27.647 No valid NVMe controllers or AIO or URING devices found 00:21:27.647 Initializing NVMe Controllers 00:21:27.647 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:27.647 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:27.647 WARNING: Some requested NVMe devices were skipped 00:21:27.647 14:27:31 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:27.647 14:27:31 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:37.626 Initializing NVMe Controllers 00:21:37.626 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:37.626 Controller IO queue size 128, less than required. 00:21:37.626 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:37.626 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:37.626 Initialization complete. Launching workers. 
00:21:37.626 ======================================================== 00:21:37.626 Latency(us) 00:21:37.626 Device Information : IOPS MiB/s Average min max 00:21:37.626 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4105.39 513.17 31211.91 6867.72 58083.58 00:21:37.626 ======================================================== 00:21:37.626 Total : 4105.39 513.17 31211.91 6867.72 58083.58 00:21:37.626 00:21:37.626 14:27:41 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:37.626 14:27:42 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete afc22667-ccd5-42c6-a3c5-bc7d251a82f5 00:21:37.626 14:27:42 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:37.626 14:27:42 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f605dce5-e4bf-45fa-935c-a208fd5c1259 00:21:37.626 14:27:42 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:37.626 14:27:43 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:37.626 14:27:43 -- host/perf.sh@114 -- # nvmftestfini 00:21:37.626 14:27:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:37.626 14:27:43 -- nvmf/common.sh@116 -- # sync 00:21:37.626 14:27:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:37.626 14:27:43 -- nvmf/common.sh@119 -- # set +e 00:21:37.626 14:27:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:37.626 14:27:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:37.626 rmmod nvme_tcp 00:21:37.626 rmmod nvme_fabrics 00:21:37.626 rmmod nvme_keyring 00:21:37.626 14:27:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:37.626 14:27:43 -- nvmf/common.sh@123 -- # set -e 00:21:37.626 14:27:43 -- nvmf/common.sh@124 -- # return 0 00:21:37.626 14:27:43 -- nvmf/common.sh@477 -- # '[' -n 93829 ']' 00:21:37.626 14:27:43 -- nvmf/common.sh@478 -- # killprocess 93829 00:21:37.626 14:27:43 -- common/autotest_common.sh@936 -- # '[' -z 93829 ']' 00:21:37.626 14:27:43 -- common/autotest_common.sh@940 -- # kill -0 93829 00:21:37.626 14:27:43 -- common/autotest_common.sh@941 -- # uname 00:21:37.626 14:27:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:37.626 14:27:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93829 00:21:37.626 killing process with pid 93829 00:21:37.626 14:27:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:37.626 14:27:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:37.626 14:27:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93829' 00:21:37.626 14:27:43 -- common/autotest_common.sh@955 -- # kill 93829 00:21:37.626 14:27:43 -- common/autotest_common.sh@960 -- # wait 93829 00:21:39.527 14:27:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:39.528 14:27:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:39.528 14:27:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:39.528 14:27:44 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:39.528 14:27:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:39.528 14:27:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.528 14:27:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:39.528 14:27:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.528 14:27:44 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:21:39.528 00:21:39.528 real 0m50.815s 00:21:39.528 user 3m12.091s 00:21:39.528 sys 0m9.651s 00:21:39.528 14:27:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:39.528 ************************************ 00:21:39.528 14:27:44 -- common/autotest_common.sh@10 -- # set +x 00:21:39.528 END TEST nvmf_perf 00:21:39.528 ************************************ 00:21:39.528 14:27:44 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:39.528 14:27:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:39.528 14:27:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:39.528 14:27:44 -- common/autotest_common.sh@10 -- # set +x 00:21:39.528 ************************************ 00:21:39.528 START TEST nvmf_fio_host 00:21:39.528 ************************************ 00:21:39.528 14:27:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:39.528 * Looking for test storage... 00:21:39.528 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:39.528 14:27:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:39.528 14:27:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:39.528 14:27:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:39.528 14:27:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:39.528 14:27:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:39.528 14:27:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:39.528 14:27:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:39.528 14:27:44 -- scripts/common.sh@335 -- # IFS=.-: 00:21:39.528 14:27:44 -- scripts/common.sh@335 -- # read -ra ver1 00:21:39.528 14:27:44 -- scripts/common.sh@336 -- # IFS=.-: 00:21:39.528 14:27:44 -- scripts/common.sh@336 -- # read -ra ver2 00:21:39.528 14:27:44 -- scripts/common.sh@337 -- # local 'op=<' 00:21:39.528 14:27:44 -- scripts/common.sh@339 -- # ver1_l=2 00:21:39.528 14:27:44 -- scripts/common.sh@340 -- # ver2_l=1 00:21:39.528 14:27:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:39.528 14:27:44 -- scripts/common.sh@343 -- # case "$op" in 00:21:39.528 14:27:44 -- scripts/common.sh@344 -- # : 1 00:21:39.528 14:27:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:39.528 14:27:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:39.528 14:27:44 -- scripts/common.sh@364 -- # decimal 1 00:21:39.528 14:27:44 -- scripts/common.sh@352 -- # local d=1 00:21:39.528 14:27:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:39.528 14:27:44 -- scripts/common.sh@354 -- # echo 1 00:21:39.528 14:27:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:39.528 14:27:44 -- scripts/common.sh@365 -- # decimal 2 00:21:39.528 14:27:44 -- scripts/common.sh@352 -- # local d=2 00:21:39.528 14:27:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:39.528 14:27:44 -- scripts/common.sh@354 -- # echo 2 00:21:39.528 14:27:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:39.528 14:27:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:39.528 14:27:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:39.528 14:27:44 -- scripts/common.sh@367 -- # return 0 00:21:39.528 14:27:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:39.528 14:27:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:39.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.528 --rc genhtml_branch_coverage=1 00:21:39.528 --rc genhtml_function_coverage=1 00:21:39.528 --rc genhtml_legend=1 00:21:39.528 --rc geninfo_all_blocks=1 00:21:39.528 --rc geninfo_unexecuted_blocks=1 00:21:39.528 00:21:39.528 ' 00:21:39.528 14:27:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:39.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.528 --rc genhtml_branch_coverage=1 00:21:39.528 --rc genhtml_function_coverage=1 00:21:39.528 --rc genhtml_legend=1 00:21:39.528 --rc geninfo_all_blocks=1 00:21:39.528 --rc geninfo_unexecuted_blocks=1 00:21:39.528 00:21:39.528 ' 00:21:39.528 14:27:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:39.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.528 --rc genhtml_branch_coverage=1 00:21:39.528 --rc genhtml_function_coverage=1 00:21:39.528 --rc genhtml_legend=1 00:21:39.528 --rc geninfo_all_blocks=1 00:21:39.528 --rc geninfo_unexecuted_blocks=1 00:21:39.528 00:21:39.528 ' 00:21:39.528 14:27:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:39.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.528 --rc genhtml_branch_coverage=1 00:21:39.528 --rc genhtml_function_coverage=1 00:21:39.528 --rc genhtml_legend=1 00:21:39.528 --rc geninfo_all_blocks=1 00:21:39.528 --rc geninfo_unexecuted_blocks=1 00:21:39.528 00:21:39.528 ' 00:21:39.528 14:27:44 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:39.528 14:27:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:39.528 14:27:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:39.528 14:27:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:39.528 14:27:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.528 14:27:44 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.528 14:27:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.528 14:27:44 -- paths/export.sh@5 -- # export PATH 00:21:39.529 14:27:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.529 14:27:44 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:39.529 14:27:44 -- nvmf/common.sh@7 -- # uname -s 00:21:39.529 14:27:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:39.529 14:27:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:39.529 14:27:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:39.529 14:27:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:39.529 14:27:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:39.529 14:27:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:39.529 14:27:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:39.529 14:27:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:39.529 14:27:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:39.529 14:27:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:39.529 14:27:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:21:39.529 14:27:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:21:39.529 14:27:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:39.529 14:27:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:39.529 14:27:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:39.529 14:27:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:39.529 14:27:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:39.529 14:27:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:39.529 14:27:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:39.529 14:27:45 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.529 14:27:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.529 14:27:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.529 14:27:45 -- paths/export.sh@5 -- # export PATH 00:21:39.529 14:27:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.529 14:27:45 -- nvmf/common.sh@46 -- # : 0 00:21:39.529 14:27:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:39.529 14:27:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:39.529 14:27:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:39.529 14:27:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:39.529 14:27:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:39.529 14:27:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:39.529 14:27:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:39.529 14:27:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:39.529 14:27:45 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:39.529 14:27:45 -- host/fio.sh@14 -- # nvmftestinit 00:21:39.529 14:27:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:39.529 14:27:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:39.529 14:27:45 -- nvmf/common.sh@436 -- # prepare_net_devs 
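The prepare_net_devs / nvmf_veth_init trace below rebuilds the same veth topology already used for the perf run above: a target-side network namespace bridged to the initiator over veth pairs. Condensed into plain commands, it is roughly the following (a sketch assembled from the trace that follows, with the loops added here only for brevity; it is not the literal nvmf/common.sh code):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, 10.0.0.1
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side, 10.0.0.2
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target address, 10.0.0.3
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

A ping from the root namespace to 10.0.0.2 and 10.0.0.3, and from the namespace back to 10.0.0.1, then confirms the path before the target application is started.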
00:21:39.529 14:27:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:39.529 14:27:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:39.529 14:27:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.529 14:27:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:39.529 14:27:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.529 14:27:45 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:39.529 14:27:45 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:39.529 14:27:45 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:39.529 14:27:45 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:39.529 14:27:45 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:39.529 14:27:45 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:39.529 14:27:45 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:39.529 14:27:45 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:39.529 14:27:45 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:39.529 14:27:45 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:39.529 14:27:45 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:39.529 14:27:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:39.529 14:27:45 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:39.529 14:27:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:39.529 14:27:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:39.529 14:27:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:39.529 14:27:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:39.529 14:27:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:39.529 14:27:45 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:39.529 14:27:45 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:39.529 Cannot find device "nvmf_tgt_br" 00:21:39.529 14:27:45 -- nvmf/common.sh@154 -- # true 00:21:39.529 14:27:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:39.529 Cannot find device "nvmf_tgt_br2" 00:21:39.529 14:27:45 -- nvmf/common.sh@155 -- # true 00:21:39.529 14:27:45 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:39.529 14:27:45 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:39.529 Cannot find device "nvmf_tgt_br" 00:21:39.529 14:27:45 -- nvmf/common.sh@157 -- # true 00:21:39.529 14:27:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:39.529 Cannot find device "nvmf_tgt_br2" 00:21:39.529 14:27:45 -- nvmf/common.sh@158 -- # true 00:21:39.529 14:27:45 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:39.529 14:27:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:39.529 14:27:45 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:39.530 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:39.530 14:27:45 -- nvmf/common.sh@161 -- # true 00:21:39.530 14:27:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:39.530 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:39.530 14:27:45 -- nvmf/common.sh@162 -- # true 00:21:39.530 14:27:45 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:39.788 14:27:45 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:39.788 14:27:45 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:39.788 14:27:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:39.788 14:27:45 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:39.788 14:27:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:39.788 14:27:45 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:39.788 14:27:45 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:39.788 14:27:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:39.788 14:27:45 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:39.788 14:27:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:39.788 14:27:45 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:39.788 14:27:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:39.788 14:27:45 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:39.788 14:27:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:39.788 14:27:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:39.788 14:27:45 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:39.788 14:27:45 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:39.788 14:27:45 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:39.788 14:27:45 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:39.788 14:27:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:39.788 14:27:45 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:39.788 14:27:45 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:39.788 14:27:45 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:39.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:39.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:21:39.788 00:21:39.788 --- 10.0.0.2 ping statistics --- 00:21:39.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.788 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:21:39.788 14:27:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:39.788 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:39.788 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:21:39.788 00:21:39.788 --- 10.0.0.3 ping statistics --- 00:21:39.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.788 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:21:39.788 14:27:45 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:39.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:39.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:21:39.788 00:21:39.788 --- 10.0.0.1 ping statistics --- 00:21:39.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.788 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:21:39.788 14:27:45 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:39.788 14:27:45 -- nvmf/common.sh@421 -- # return 0 00:21:39.788 14:27:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:39.788 14:27:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:39.788 14:27:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:39.788 14:27:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:39.788 14:27:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:39.788 14:27:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:39.788 14:27:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:39.788 14:27:45 -- host/fio.sh@16 -- # [[ y != y ]] 00:21:39.788 14:27:45 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:39.788 14:27:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:39.788 14:27:45 -- common/autotest_common.sh@10 -- # set +x 00:21:39.788 14:27:45 -- host/fio.sh@24 -- # nvmfpid=94807 00:21:39.788 14:27:45 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:39.788 14:27:45 -- host/fio.sh@28 -- # waitforlisten 94807 00:21:39.788 14:27:45 -- common/autotest_common.sh@829 -- # '[' -z 94807 ']' 00:21:39.788 14:27:45 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:39.788 14:27:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:39.788 14:27:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:39.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:39.788 14:27:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:39.788 14:27:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:39.788 14:27:45 -- common/autotest_common.sh@10 -- # set +x 00:21:39.788 [2024-12-05 14:27:45.433000] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:39.788 [2024-12-05 14:27:45.433096] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:40.046 [2024-12-05 14:27:45.576184] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:40.046 [2024-12-05 14:27:45.643818] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:40.046 [2024-12-05 14:27:45.644024] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:40.046 [2024-12-05 14:27:45.644042] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:40.046 [2024-12-05 14:27:45.644054] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
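With nvmf_tgt (pid 94807) up inside the namespace, the remaining setup traced below reduces to a short RPC sequence plus the fio plugin invocation. As a condensed sketch (here rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py and other paths are abbreviated; the literal calls appear in the trace that follows):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # run fio against the exported namespace through the SPDK nvme ioengine
    LD_PRELOAD=build/fio/spdk_nvme /usr/src/fio/fio app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096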
00:21:40.046 [2024-12-05 14:27:45.644235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:40.046 [2024-12-05 14:27:45.644753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:40.046 [2024-12-05 14:27:45.644893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:40.046 [2024-12-05 14:27:45.644903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.979 14:27:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:40.979 14:27:46 -- common/autotest_common.sh@862 -- # return 0 00:21:40.979 14:27:46 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:40.979 [2024-12-05 14:27:46.553112] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:40.979 14:27:46 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:40.979 14:27:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:40.979 14:27:46 -- common/autotest_common.sh@10 -- # set +x 00:21:41.238 14:27:46 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:41.496 Malloc1 00:21:41.496 14:27:46 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:41.755 14:27:47 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:41.755 14:27:47 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:42.014 [2024-12-05 14:27:47.591442] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:42.014 14:27:47 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:42.273 14:27:47 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:21:42.273 14:27:47 -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:42.273 14:27:47 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:42.273 14:27:47 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:42.273 14:27:47 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:42.273 14:27:47 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:42.273 14:27:47 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:42.273 14:27:47 -- common/autotest_common.sh@1330 -- # shift 00:21:42.273 14:27:47 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:42.273 14:27:47 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:42.273 14:27:47 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:42.273 14:27:47 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:42.273 14:27:47 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:42.273 14:27:47 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:42.273 14:27:47 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:42.273 14:27:47 -- 
common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:42.273 14:27:47 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:42.273 14:27:47 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:42.273 14:27:47 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:42.273 14:27:47 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:42.273 14:27:47 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:42.273 14:27:47 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:42.273 14:27:47 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:42.533 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:42.533 fio-3.35 00:21:42.533 Starting 1 thread 00:21:45.069 00:21:45.069 test: (groupid=0, jobs=1): err= 0: pid=94930: Thu Dec 5 14:27:50 2024 00:21:45.069 read: IOPS=10.7k, BW=41.7MiB/s (43.7MB/s)(83.6MiB/2005msec) 00:21:45.069 slat (nsec): min=1786, max=394449, avg=2309.75, stdev=3507.62 00:21:45.069 clat (usec): min=3752, max=11264, avg=6368.69, stdev=603.88 00:21:45.069 lat (usec): min=3788, max=11266, avg=6371.00, stdev=603.82 00:21:45.069 clat percentiles (usec): 00:21:45.069 | 1.00th=[ 5211], 5.00th=[ 5538], 10.00th=[ 5735], 20.00th=[ 5932], 00:21:45.069 | 30.00th=[ 6063], 40.00th=[ 6194], 50.00th=[ 6325], 60.00th=[ 6456], 00:21:45.069 | 70.00th=[ 6587], 80.00th=[ 6783], 90.00th=[ 7046], 95.00th=[ 7308], 00:21:45.069 | 99.00th=[ 8160], 99.50th=[ 9372], 99.90th=[10290], 99.95th=[10945], 00:21:45.069 | 99.99th=[11207] 00:21:45.069 bw ( KiB/s): min=41456, max=43360, per=99.90%, avg=42662.00, stdev=830.80, samples=4 00:21:45.069 iops : min=10364, max=10840, avg=10665.50, stdev=207.70, samples=4 00:21:45.069 write: IOPS=10.7k, BW=41.6MiB/s (43.7MB/s)(83.5MiB/2005msec); 0 zone resets 00:21:45.069 slat (nsec): min=1803, max=348117, avg=2367.39, stdev=2814.28 00:21:45.069 clat (usec): min=2726, max=10106, avg=5583.84, stdev=489.34 00:21:45.069 lat (usec): min=2740, max=10108, avg=5586.20, stdev=489.32 00:21:45.069 clat percentiles (usec): 00:21:45.069 | 1.00th=[ 4555], 5.00th=[ 4883], 10.00th=[ 5014], 20.00th=[ 5211], 00:21:45.069 | 30.00th=[ 5342], 40.00th=[ 5473], 50.00th=[ 5538], 60.00th=[ 5669], 00:21:45.069 | 70.00th=[ 5800], 80.00th=[ 5932], 90.00th=[ 6128], 95.00th=[ 6325], 00:21:45.069 | 99.00th=[ 6980], 99.50th=[ 7373], 99.90th=[ 8848], 99.95th=[ 9503], 00:21:45.069 | 99.99th=[ 9634] 00:21:45.069 bw ( KiB/s): min=41944, max=43200, per=100.00%, avg=42646.00, stdev=525.71, samples=4 00:21:45.069 iops : min=10486, max=10800, avg=10661.50, stdev=131.43, samples=4 00:21:45.069 lat (msec) : 4=0.09%, 10=99.80%, 20=0.11% 00:21:45.069 cpu : usr=64.37%, sys=25.60%, ctx=43, majf=0, minf=5 00:21:45.069 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:21:45.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:45.069 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:45.069 issued rwts: total=21406,21375,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:45.069 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:45.069 00:21:45.069 Run status group 0 (all jobs): 00:21:45.069 READ: bw=41.7MiB/s (43.7MB/s), 41.7MiB/s-41.7MiB/s (43.7MB/s-43.7MB/s), io=83.6MiB (87.7MB), 
run=2005-2005msec 00:21:45.069 WRITE: bw=41.6MiB/s (43.7MB/s), 41.6MiB/s-41.6MiB/s (43.7MB/s-43.7MB/s), io=83.5MiB (87.6MB), run=2005-2005msec 00:21:45.069 14:27:50 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:45.069 14:27:50 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:45.069 14:27:50 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:45.069 14:27:50 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:45.069 14:27:50 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:45.069 14:27:50 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:45.069 14:27:50 -- common/autotest_common.sh@1330 -- # shift 00:21:45.069 14:27:50 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:45.069 14:27:50 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:45.069 14:27:50 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:45.069 14:27:50 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:45.069 14:27:50 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:45.069 14:27:50 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:45.069 14:27:50 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:45.069 14:27:50 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:45.069 14:27:50 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:45.069 14:27:50 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:45.069 14:27:50 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:45.069 14:27:50 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:45.069 14:27:50 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:45.069 14:27:50 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:45.069 14:27:50 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:45.069 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:45.069 fio-3.35 00:21:45.069 Starting 1 thread 00:21:47.601 00:21:47.601 test: (groupid=0, jobs=1): err= 0: pid=94976: Thu Dec 5 14:27:52 2024 00:21:47.601 read: IOPS=9032, BW=141MiB/s (148MB/s)(283MiB/2005msec) 00:21:47.601 slat (usec): min=2, max=107, avg= 3.43, stdev= 2.06 00:21:47.601 clat (usec): min=2464, max=16596, avg=8486.42, stdev=2092.94 00:21:47.601 lat (usec): min=2467, max=16600, avg=8489.85, stdev=2093.15 00:21:47.601 clat percentiles (usec): 00:21:47.601 | 1.00th=[ 4359], 5.00th=[ 5342], 10.00th=[ 5932], 20.00th=[ 6718], 00:21:47.601 | 30.00th=[ 7308], 40.00th=[ 7898], 50.00th=[ 8356], 60.00th=[ 8848], 00:21:47.601 | 70.00th=[ 9503], 80.00th=[ 9896], 90.00th=[10945], 95.00th=[12518], 00:21:47.601 | 99.00th=[14484], 99.50th=[15139], 99.90th=[15533], 99.95th=[15795], 00:21:47.601 | 99.99th=[16581] 00:21:47.601 bw ( KiB/s): min=68096, max=74176, per=49.18%, avg=71080.00, stdev=3220.56, samples=4 00:21:47.601 iops : 
min= 4256, max= 4636, avg=4442.50, stdev=201.29, samples=4 00:21:47.601 write: IOPS=5116, BW=79.9MiB/s (83.8MB/s)(145MiB/1809msec); 0 zone resets 00:21:47.601 slat (usec): min=29, max=294, avg=33.86, stdev= 8.15 00:21:47.601 clat (usec): min=2546, max=16484, avg=10134.80, stdev=1872.37 00:21:47.601 lat (usec): min=2578, max=16531, avg=10168.66, stdev=1874.92 00:21:47.601 clat percentiles (usec): 00:21:47.601 | 1.00th=[ 6849], 5.00th=[ 7701], 10.00th=[ 8094], 20.00th=[ 8586], 00:21:47.601 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[ 9896], 60.00th=[10290], 00:21:47.601 | 70.00th=[10814], 80.00th=[11469], 90.00th=[12649], 95.00th=[13960], 00:21:47.601 | 99.00th=[15401], 99.50th=[15664], 99.90th=[16188], 99.95th=[16319], 00:21:47.601 | 99.99th=[16450] 00:21:47.601 bw ( KiB/s): min=70496, max=77824, per=90.14%, avg=73792.00, stdev=3711.72, samples=4 00:21:47.601 iops : min= 4406, max= 4864, avg=4612.00, stdev=231.98, samples=4 00:21:47.601 lat (msec) : 4=0.48%, 10=70.74%, 20=28.78% 00:21:47.601 cpu : usr=69.06%, sys=19.31%, ctx=635, majf=0, minf=1 00:21:47.601 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:21:47.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:47.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:47.601 issued rwts: total=18110,9256,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:47.601 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:47.601 00:21:47.601 Run status group 0 (all jobs): 00:21:47.601 READ: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s), io=283MiB (297MB), run=2005-2005msec 00:21:47.601 WRITE: bw=79.9MiB/s (83.8MB/s), 79.9MiB/s-79.9MiB/s (83.8MB/s-83.8MB/s), io=145MiB (152MB), run=1809-1809msec 00:21:47.601 14:27:52 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:47.601 14:27:53 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:21:47.601 14:27:53 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:21:47.601 14:27:53 -- host/fio.sh@51 -- # get_nvme_bdfs 00:21:47.601 14:27:53 -- common/autotest_common.sh@1508 -- # bdfs=() 00:21:47.601 14:27:53 -- common/autotest_common.sh@1508 -- # local bdfs 00:21:47.601 14:27:53 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:47.601 14:27:53 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:47.601 14:27:53 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:21:47.601 14:27:53 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:21:47.601 14:27:53 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:21:47.601 14:27:53 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:21:47.861 Nvme0n1 00:21:47.861 14:27:53 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:21:48.429 14:27:53 -- host/fio.sh@53 -- # ls_guid=82c765d7-dbf1-4f7c-a9cf-902c803294d8 00:21:48.429 14:27:53 -- host/fio.sh@54 -- # get_lvs_free_mb 82c765d7-dbf1-4f7c-a9cf-902c803294d8 00:21:48.429 14:27:53 -- common/autotest_common.sh@1353 -- # local lvs_uuid=82c765d7-dbf1-4f7c-a9cf-902c803294d8 00:21:48.429 14:27:53 -- common/autotest_common.sh@1354 -- # local lvs_info 00:21:48.429 14:27:53 -- common/autotest_common.sh@1355 -- # local fc 00:21:48.429 14:27:53 -- 
common/autotest_common.sh@1356 -- # local cs 00:21:48.429 14:27:53 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:48.429 14:27:54 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:21:48.429 { 00:21:48.429 "base_bdev": "Nvme0n1", 00:21:48.429 "block_size": 4096, 00:21:48.429 "cluster_size": 1073741824, 00:21:48.429 "free_clusters": 4, 00:21:48.429 "name": "lvs_0", 00:21:48.429 "total_data_clusters": 4, 00:21:48.429 "uuid": "82c765d7-dbf1-4f7c-a9cf-902c803294d8" 00:21:48.429 } 00:21:48.429 ]' 00:21:48.429 14:27:54 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="82c765d7-dbf1-4f7c-a9cf-902c803294d8") .free_clusters' 00:21:48.429 14:27:54 -- common/autotest_common.sh@1358 -- # fc=4 00:21:48.429 14:27:54 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="82c765d7-dbf1-4f7c-a9cf-902c803294d8") .cluster_size' 00:21:48.689 4096 00:21:48.689 14:27:54 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:21:48.689 14:27:54 -- common/autotest_common.sh@1362 -- # free_mb=4096 00:21:48.689 14:27:54 -- common/autotest_common.sh@1363 -- # echo 4096 00:21:48.689 14:27:54 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:21:48.948 f5466977-b517-47cb-ba0e-a14393b9ffb4 00:21:48.948 14:27:54 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:21:49.207 14:27:54 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:21:49.207 14:27:54 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:49.466 14:27:55 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:49.466 14:27:55 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:49.466 14:27:55 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:49.466 14:27:55 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:49.466 14:27:55 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:49.466 14:27:55 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:49.466 14:27:55 -- common/autotest_common.sh@1330 -- # shift 00:21:49.466 14:27:55 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:49.466 14:27:55 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:49.466 14:27:55 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:49.466 14:27:55 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:49.466 14:27:55 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:49.466 14:27:55 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:49.466 14:27:55 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:49.466 14:27:55 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:49.466 14:27:55 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:49.466 14:27:55 -- 
common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:49.466 14:27:55 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:49.466 14:27:55 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:49.466 14:27:55 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:49.466 14:27:55 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:49.467 14:27:55 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:49.724 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:49.724 fio-3.35 00:21:49.724 Starting 1 thread 00:21:52.255 00:21:52.256 test: (groupid=0, jobs=1): err= 0: pid=95133: Thu Dec 5 14:27:57 2024 00:21:52.256 read: IOPS=6212, BW=24.3MiB/s (25.4MB/s)(49.7MiB/2049msec) 00:21:52.256 slat (nsec): min=1747, max=335801, avg=2790.70, stdev=4746.23 00:21:52.256 clat (usec): min=4408, max=58713, avg=10882.04, stdev=3101.34 00:21:52.256 lat (usec): min=4418, max=58715, avg=10884.83, stdev=3101.25 00:21:52.256 clat percentiles (usec): 00:21:52.256 | 1.00th=[ 8586], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[ 9896], 00:21:52.256 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10683], 60.00th=[10945], 00:21:52.256 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11994], 95.00th=[12387], 00:21:52.256 | 99.00th=[13173], 99.50th=[13829], 99.90th=[56886], 99.95th=[57934], 00:21:52.256 | 99.99th=[58459] 00:21:52.256 bw ( KiB/s): min=24112, max=26280, per=100.00%, avg=25340.00, stdev=904.87, samples=4 00:21:52.256 iops : min= 6028, max= 6570, avg=6335.00, stdev=226.22, samples=4 00:21:52.256 write: IOPS=6211, BW=24.3MiB/s (25.4MB/s)(49.7MiB/2049msec); 0 zone resets 00:21:52.256 slat (nsec): min=1818, max=246203, avg=2924.50, stdev=3722.42 00:21:52.256 clat (usec): min=2498, max=58813, avg=9581.59, stdev=3318.55 00:21:52.256 lat (usec): min=2512, max=58816, avg=9584.52, stdev=3318.47 00:21:52.256 clat percentiles (usec): 00:21:52.256 | 1.00th=[ 7373], 5.00th=[ 8029], 10.00th=[ 8291], 20.00th=[ 8717], 00:21:52.256 | 30.00th=[ 8979], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9634], 00:21:52.256 | 70.00th=[ 9765], 80.00th=[10028], 90.00th=[10421], 95.00th=[10683], 00:21:52.256 | 99.00th=[11338], 99.50th=[48497], 99.90th=[56886], 99.95th=[57934], 00:21:52.256 | 99.99th=[58983] 00:21:52.256 bw ( KiB/s): min=25112, max=25728, per=100.00%, avg=25320.00, stdev=276.74, samples=4 00:21:52.256 iops : min= 6278, max= 6432, avg=6330.00, stdev=69.19, samples=4 00:21:52.256 lat (msec) : 4=0.04%, 10=51.31%, 20=48.15%, 50=0.11%, 100=0.38% 00:21:52.256 cpu : usr=71.44%, sys=21.78%, ctx=6, majf=0, minf=5 00:21:52.256 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:52.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:52.256 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:52.256 issued rwts: total=12730,12727,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:52.256 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:52.256 00:21:52.256 Run status group 0 (all jobs): 00:21:52.256 READ: bw=24.3MiB/s (25.4MB/s), 24.3MiB/s-24.3MiB/s (25.4MB/s-25.4MB/s), io=49.7MiB (52.1MB), run=2049-2049msec 00:21:52.256 WRITE: bw=24.3MiB/s (25.4MB/s), 24.3MiB/s-24.3MiB/s (25.4MB/s-25.4MB/s), io=49.7MiB (52.1MB), run=2049-2049msec 00:21:52.256 14:27:57 -- host/fio.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:52.256 14:27:57 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:21:52.515 14:27:58 -- host/fio.sh@64 -- # ls_nested_guid=f3b8b790-7110-408e-89f0-e75575d82879 00:21:52.515 14:27:58 -- host/fio.sh@65 -- # get_lvs_free_mb f3b8b790-7110-408e-89f0-e75575d82879 00:21:52.515 14:27:58 -- common/autotest_common.sh@1353 -- # local lvs_uuid=f3b8b790-7110-408e-89f0-e75575d82879 00:21:52.515 14:27:58 -- common/autotest_common.sh@1354 -- # local lvs_info 00:21:52.515 14:27:58 -- common/autotest_common.sh@1355 -- # local fc 00:21:52.515 14:27:58 -- common/autotest_common.sh@1356 -- # local cs 00:21:52.515 14:27:58 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:52.774 14:27:58 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:21:52.774 { 00:21:52.774 "base_bdev": "Nvme0n1", 00:21:52.774 "block_size": 4096, 00:21:52.774 "cluster_size": 1073741824, 00:21:52.774 "free_clusters": 0, 00:21:52.774 "name": "lvs_0", 00:21:52.774 "total_data_clusters": 4, 00:21:52.774 "uuid": "82c765d7-dbf1-4f7c-a9cf-902c803294d8" 00:21:52.774 }, 00:21:52.774 { 00:21:52.774 "base_bdev": "f5466977-b517-47cb-ba0e-a14393b9ffb4", 00:21:52.774 "block_size": 4096, 00:21:52.774 "cluster_size": 4194304, 00:21:52.774 "free_clusters": 1022, 00:21:52.774 "name": "lvs_n_0", 00:21:52.774 "total_data_clusters": 1022, 00:21:52.774 "uuid": "f3b8b790-7110-408e-89f0-e75575d82879" 00:21:52.774 } 00:21:52.774 ]' 00:21:52.774 14:27:58 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="f3b8b790-7110-408e-89f0-e75575d82879") .free_clusters' 00:21:52.774 14:27:58 -- common/autotest_common.sh@1358 -- # fc=1022 00:21:52.774 14:27:58 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="f3b8b790-7110-408e-89f0-e75575d82879") .cluster_size' 00:21:52.774 4088 00:21:52.774 14:27:58 -- common/autotest_common.sh@1359 -- # cs=4194304 00:21:52.774 14:27:58 -- common/autotest_common.sh@1362 -- # free_mb=4088 00:21:52.774 14:27:58 -- common/autotest_common.sh@1363 -- # echo 4088 00:21:52.774 14:27:58 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:21:53.033 8ffb9602-fef9-44d9-9db8-fe961974c63d 00:21:53.033 14:27:58 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:21:53.292 14:27:58 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:21:53.551 14:27:59 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:53.811 14:27:59 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:53.811 14:27:59 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:53.811 14:27:59 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:53.811 14:27:59 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:53.811 
14:27:59 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:53.811 14:27:59 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:53.811 14:27:59 -- common/autotest_common.sh@1330 -- # shift 00:21:53.811 14:27:59 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:53.811 14:27:59 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:53.811 14:27:59 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:53.811 14:27:59 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:53.811 14:27:59 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:53.811 14:27:59 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:53.811 14:27:59 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:53.811 14:27:59 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:53.811 14:27:59 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:53.811 14:27:59 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:53.811 14:27:59 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:53.811 14:27:59 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:53.811 14:27:59 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:53.811 14:27:59 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:53.811 14:27:59 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:53.811 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:53.811 fio-3.35 00:21:53.811 Starting 1 thread 00:21:56.347 00:21:56.347 test: (groupid=0, jobs=1): err= 0: pid=95255: Thu Dec 5 14:28:01 2024 00:21:56.347 read: IOPS=5538, BW=21.6MiB/s (22.7MB/s)(43.5MiB/2009msec) 00:21:56.347 slat (nsec): min=1710, max=302163, avg=2928.18, stdev=4910.28 00:21:56.347 clat (usec): min=5047, max=20830, avg=12308.07, stdev=1166.70 00:21:56.347 lat (usec): min=5054, max=20832, avg=12310.99, stdev=1166.48 00:21:56.347 clat percentiles (usec): 00:21:56.347 | 1.00th=[ 9896], 5.00th=[10552], 10.00th=[10945], 20.00th=[11338], 00:21:56.347 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12256], 60.00th=[12518], 00:21:56.347 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13698], 95.00th=[14222], 00:21:56.347 | 99.00th=[15139], 99.50th=[15533], 99.90th=[19006], 99.95th=[20579], 00:21:56.347 | 99.99th=[20841] 00:21:56.347 bw ( KiB/s): min=21104, max=22712, per=99.83%, avg=22114.00, stdev=712.86, samples=4 00:21:56.347 iops : min= 5276, max= 5678, avg=5528.50, stdev=178.22, samples=4 00:21:56.347 write: IOPS=5501, BW=21.5MiB/s (22.5MB/s)(43.2MiB/2009msec); 0 zone resets 00:21:56.347 slat (nsec): min=1804, max=232802, avg=3031.53, stdev=4062.64 00:21:56.347 clat (usec): min=2394, max=18886, avg=10772.25, stdev=1012.58 00:21:56.347 lat (usec): min=2403, max=18889, avg=10775.28, stdev=1012.43 00:21:56.347 clat percentiles (usec): 00:21:56.347 | 1.00th=[ 8455], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[10028], 00:21:56.347 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10814], 60.00th=[10945], 00:21:56.347 | 70.00th=[11207], 80.00th=[11600], 90.00th=[11994], 95.00th=[12256], 00:21:56.347 | 99.00th=[13042], 99.50th=[13304], 99.90th=[17433], 99.95th=[18482], 00:21:56.347 | 99.99th=[18744] 
00:21:56.347 bw ( KiB/s): min=21760, max=22296, per=99.92%, avg=21990.00, stdev=252.25, samples=4 00:21:56.347 iops : min= 5440, max= 5574, avg=5497.50, stdev=63.06, samples=4 00:21:56.347 lat (msec) : 4=0.04%, 10=10.48%, 20=89.44%, 50=0.03% 00:21:56.347 cpu : usr=73.16%, sys=20.62%, ctx=5, majf=0, minf=5 00:21:56.347 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:21:56.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:56.347 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:56.347 issued rwts: total=11126,11053,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:56.347 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:56.347 00:21:56.347 Run status group 0 (all jobs): 00:21:56.347 READ: bw=21.6MiB/s (22.7MB/s), 21.6MiB/s-21.6MiB/s (22.7MB/s-22.7MB/s), io=43.5MiB (45.6MB), run=2009-2009msec 00:21:56.347 WRITE: bw=21.5MiB/s (22.5MB/s), 21.5MiB/s-21.5MiB/s (22.5MB/s-22.5MB/s), io=43.2MiB (45.3MB), run=2009-2009msec 00:21:56.347 14:28:01 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:56.347 14:28:01 -- host/fio.sh@74 -- # sync 00:21:56.606 14:28:02 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:21:56.606 14:28:02 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:56.865 14:28:02 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:21:57.124 14:28:02 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:57.383 14:28:02 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:21:58.320 14:28:03 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:58.320 14:28:03 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:58.320 14:28:03 -- host/fio.sh@86 -- # nvmftestfini 00:21:58.320 14:28:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:58.320 14:28:03 -- nvmf/common.sh@116 -- # sync 00:21:58.320 14:28:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:58.320 14:28:03 -- nvmf/common.sh@119 -- # set +e 00:21:58.320 14:28:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:58.320 14:28:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:58.320 rmmod nvme_tcp 00:21:58.320 rmmod nvme_fabrics 00:21:58.320 rmmod nvme_keyring 00:21:58.320 14:28:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:58.320 14:28:03 -- nvmf/common.sh@123 -- # set -e 00:21:58.320 14:28:03 -- nvmf/common.sh@124 -- # return 0 00:21:58.320 14:28:03 -- nvmf/common.sh@477 -- # '[' -n 94807 ']' 00:21:58.320 14:28:03 -- nvmf/common.sh@478 -- # killprocess 94807 00:21:58.320 14:28:03 -- common/autotest_common.sh@936 -- # '[' -z 94807 ']' 00:21:58.320 14:28:03 -- common/autotest_common.sh@940 -- # kill -0 94807 00:21:58.320 14:28:03 -- common/autotest_common.sh@941 -- # uname 00:21:58.320 14:28:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:58.320 14:28:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94807 00:21:58.320 killing process with pid 94807 00:21:58.320 14:28:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:58.320 14:28:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:58.320 14:28:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94807' 00:21:58.320 
14:28:03 -- common/autotest_common.sh@955 -- # kill 94807 00:21:58.320 14:28:03 -- common/autotest_common.sh@960 -- # wait 94807 00:21:58.579 14:28:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:58.579 14:28:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:58.579 14:28:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:58.579 14:28:04 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:58.579 14:28:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:58.579 14:28:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.579 14:28:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:58.579 14:28:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:58.579 14:28:04 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:58.579 00:21:58.579 real 0m19.369s 00:21:58.579 user 1m23.942s 00:21:58.579 sys 0m4.311s 00:21:58.579 14:28:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:58.579 14:28:04 -- common/autotest_common.sh@10 -- # set +x 00:21:58.579 ************************************ 00:21:58.579 END TEST nvmf_fio_host 00:21:58.579 ************************************ 00:21:58.839 14:28:04 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:58.839 14:28:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:58.839 14:28:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:58.839 14:28:04 -- common/autotest_common.sh@10 -- # set +x 00:21:58.839 ************************************ 00:21:58.839 START TEST nvmf_failover 00:21:58.839 ************************************ 00:21:58.839 14:28:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:58.839 * Looking for test storage... 00:21:58.839 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:58.839 14:28:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:58.839 14:28:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:58.839 14:28:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:58.839 14:28:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:58.839 14:28:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:58.839 14:28:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:58.839 14:28:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:58.839 14:28:04 -- scripts/common.sh@335 -- # IFS=.-: 00:21:58.839 14:28:04 -- scripts/common.sh@335 -- # read -ra ver1 00:21:58.839 14:28:04 -- scripts/common.sh@336 -- # IFS=.-: 00:21:58.839 14:28:04 -- scripts/common.sh@336 -- # read -ra ver2 00:21:58.839 14:28:04 -- scripts/common.sh@337 -- # local 'op=<' 00:21:58.839 14:28:04 -- scripts/common.sh@339 -- # ver1_l=2 00:21:58.839 14:28:04 -- scripts/common.sh@340 -- # ver2_l=1 00:21:58.839 14:28:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:58.839 14:28:04 -- scripts/common.sh@343 -- # case "$op" in 00:21:58.839 14:28:04 -- scripts/common.sh@344 -- # : 1 00:21:58.839 14:28:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:58.839 14:28:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:58.839 14:28:04 -- scripts/common.sh@364 -- # decimal 1 00:21:58.839 14:28:04 -- scripts/common.sh@352 -- # local d=1 00:21:58.839 14:28:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:58.839 14:28:04 -- scripts/common.sh@354 -- # echo 1 00:21:58.839 14:28:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:58.839 14:28:04 -- scripts/common.sh@365 -- # decimal 2 00:21:58.839 14:28:04 -- scripts/common.sh@352 -- # local d=2 00:21:58.839 14:28:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:58.839 14:28:04 -- scripts/common.sh@354 -- # echo 2 00:21:58.839 14:28:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:58.839 14:28:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:58.839 14:28:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:58.839 14:28:04 -- scripts/common.sh@367 -- # return 0 00:21:58.839 14:28:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:58.839 14:28:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:58.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.839 --rc genhtml_branch_coverage=1 00:21:58.839 --rc genhtml_function_coverage=1 00:21:58.839 --rc genhtml_legend=1 00:21:58.839 --rc geninfo_all_blocks=1 00:21:58.839 --rc geninfo_unexecuted_blocks=1 00:21:58.839 00:21:58.839 ' 00:21:58.839 14:28:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:58.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.839 --rc genhtml_branch_coverage=1 00:21:58.839 --rc genhtml_function_coverage=1 00:21:58.839 --rc genhtml_legend=1 00:21:58.839 --rc geninfo_all_blocks=1 00:21:58.839 --rc geninfo_unexecuted_blocks=1 00:21:58.839 00:21:58.839 ' 00:21:58.839 14:28:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:58.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.839 --rc genhtml_branch_coverage=1 00:21:58.839 --rc genhtml_function_coverage=1 00:21:58.839 --rc genhtml_legend=1 00:21:58.839 --rc geninfo_all_blocks=1 00:21:58.839 --rc geninfo_unexecuted_blocks=1 00:21:58.839 00:21:58.839 ' 00:21:58.839 14:28:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:58.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.839 --rc genhtml_branch_coverage=1 00:21:58.839 --rc genhtml_function_coverage=1 00:21:58.839 --rc genhtml_legend=1 00:21:58.839 --rc geninfo_all_blocks=1 00:21:58.839 --rc geninfo_unexecuted_blocks=1 00:21:58.839 00:21:58.839 ' 00:21:58.839 14:28:04 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:58.839 14:28:04 -- nvmf/common.sh@7 -- # uname -s 00:21:58.839 14:28:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:58.839 14:28:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:58.839 14:28:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:58.839 14:28:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:58.839 14:28:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:58.839 14:28:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:58.839 14:28:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:58.839 14:28:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:58.839 14:28:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:58.839 14:28:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:58.840 14:28:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:21:58.840 
14:28:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:21:58.840 14:28:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:58.840 14:28:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:58.840 14:28:04 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:58.840 14:28:04 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:58.840 14:28:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:58.840 14:28:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:58.840 14:28:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:58.840 14:28:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.840 14:28:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.840 14:28:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.840 14:28:04 -- paths/export.sh@5 -- # export PATH 00:21:58.840 14:28:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.840 14:28:04 -- nvmf/common.sh@46 -- # : 0 00:21:58.840 14:28:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:58.840 14:28:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:58.840 14:28:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:58.840 14:28:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:58.840 14:28:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:58.840 14:28:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
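Annotation: the host-identity variables set earlier in this sourcing of nvmf/common.sh (NVME_HOSTNQN from 'nvme gen-hostnqn', NVME_HOSTID, and the NVME_HOST argument array) are what later nvme-cli calls in these tests consume. The sketch below is a rough, hedged illustration of how they compose into a connect command; the target address, port, and subsystem NQN are simply the values used elsewhere in this log, and the composed line is not a command from this trace.

# Sketch only: how the variables traced above are typically handed to nvme-cli.
NVME_HOSTNQN=$(nvme gen-hostnqn)                     # e.g. nqn.2014-08.org.nvmexpress:uuid:14bce80c-...
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}                  # assumption: the host ID is the UUID portion of the NQN
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"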
00:21:58.840 14:28:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:58.840 14:28:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:58.840 14:28:04 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:58.840 14:28:04 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:58.840 14:28:04 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:58.840 14:28:04 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:58.840 14:28:04 -- host/failover.sh@18 -- # nvmftestinit 00:21:58.840 14:28:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:58.840 14:28:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:58.840 14:28:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:58.840 14:28:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:58.840 14:28:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:58.840 14:28:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.840 14:28:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:58.840 14:28:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:58.840 14:28:04 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:58.840 14:28:04 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:58.840 14:28:04 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:58.840 14:28:04 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:58.840 14:28:04 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:58.840 14:28:04 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:58.840 14:28:04 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:58.840 14:28:04 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:58.840 14:28:04 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:58.840 14:28:04 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:58.840 14:28:04 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:58.840 14:28:04 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:58.840 14:28:04 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:58.840 14:28:04 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:58.840 14:28:04 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:58.840 14:28:04 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:58.840 14:28:04 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:58.840 14:28:04 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:58.840 14:28:04 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:58.840 14:28:04 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:58.840 Cannot find device "nvmf_tgt_br" 00:21:58.840 14:28:04 -- nvmf/common.sh@154 -- # true 00:21:58.840 14:28:04 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:58.840 Cannot find device "nvmf_tgt_br2" 00:21:58.840 14:28:04 -- nvmf/common.sh@155 -- # true 00:21:58.840 14:28:04 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:58.840 14:28:04 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:58.840 Cannot find device "nvmf_tgt_br" 00:21:58.840 14:28:04 -- nvmf/common.sh@157 -- # true 00:21:58.840 14:28:04 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:59.106 Cannot find device "nvmf_tgt_br2" 00:21:59.106 14:28:04 -- nvmf/common.sh@158 -- # true 00:21:59.106 14:28:04 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:59.106 14:28:04 -- nvmf/common.sh@160 
-- # ip link delete nvmf_init_if 00:21:59.106 14:28:04 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:59.106 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:59.106 14:28:04 -- nvmf/common.sh@161 -- # true 00:21:59.106 14:28:04 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:59.106 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:59.106 14:28:04 -- nvmf/common.sh@162 -- # true 00:21:59.106 14:28:04 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:59.106 14:28:04 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:59.106 14:28:04 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:59.106 14:28:04 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:59.106 14:28:04 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:59.106 14:28:04 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:59.106 14:28:04 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:59.106 14:28:04 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:59.106 14:28:04 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:59.106 14:28:04 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:59.106 14:28:04 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:59.106 14:28:04 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:59.106 14:28:04 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:59.106 14:28:04 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:59.106 14:28:04 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:59.106 14:28:04 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:59.106 14:28:04 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:59.106 14:28:04 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:59.106 14:28:04 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:59.106 14:28:04 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:59.106 14:28:04 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:59.106 14:28:04 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:59.106 14:28:04 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:59.106 14:28:04 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:59.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:59.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:21:59.106 00:21:59.106 --- 10.0.0.2 ping statistics --- 00:21:59.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.106 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:21:59.106 14:28:04 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:59.106 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:59.106 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:21:59.106 00:21:59.106 --- 10.0.0.3 ping statistics --- 00:21:59.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.106 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:21:59.106 14:28:04 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:59.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:59.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:21:59.106 00:21:59.106 --- 10.0.0.1 ping statistics --- 00:21:59.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.106 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:21:59.106 14:28:04 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:59.106 14:28:04 -- nvmf/common.sh@421 -- # return 0 00:21:59.106 14:28:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:59.106 14:28:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:59.106 14:28:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:59.106 14:28:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:59.106 14:28:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:59.106 14:28:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:59.106 14:28:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:59.423 14:28:04 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:59.423 14:28:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:59.423 14:28:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:59.424 14:28:04 -- common/autotest_common.sh@10 -- # set +x 00:21:59.424 14:28:04 -- nvmf/common.sh@469 -- # nvmfpid=95534 00:21:59.424 14:28:04 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:59.424 14:28:04 -- nvmf/common.sh@470 -- # waitforlisten 95534 00:21:59.424 14:28:04 -- common/autotest_common.sh@829 -- # '[' -z 95534 ']' 00:21:59.424 14:28:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.424 14:28:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:59.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.424 14:28:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.424 14:28:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:59.424 14:28:04 -- common/autotest_common.sh@10 -- # set +x 00:21:59.424 [2024-12-05 14:28:04.808751] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:59.424 [2024-12-05 14:28:04.808821] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:59.424 [2024-12-05 14:28:04.945490] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:59.424 [2024-12-05 14:28:05.031610] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:59.424 [2024-12-05 14:28:05.031829] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:59.424 [2024-12-05 14:28:05.031849] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
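Annotation: the nvmf_veth_init sequence traced above boils down to a small set of commands that build the test network: a dedicated network namespace for the target, veth pairs linking it to the host, a bridge joining the host-side ends, and an iptables rule admitting NVMe/TCP traffic. The condensed sketch below is assembled from the commands visible in the trace (the second target interface nvmf_tgt_if2/10.0.0.3 and the various 'ip link set ... up' calls are elided for brevity); it is a recap, not additional test output.

# Target gets its own namespace; veth pairs link it back to the host.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator end stays on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target end moves into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Addresses used by the tests: initiator 10.0.0.1, target 10.0.0.2.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

# Bridge the host-side veth ends together and accept inbound NVMe/TCP on port 4420.
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

# The target application is then launched inside the namespace, as in the trace:
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE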
00:21:59.424 [2024-12-05 14:28:05.031862] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:59.424 [2024-12-05 14:28:05.032064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:59.424 [2024-12-05 14:28:05.033016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:59.424 [2024-12-05 14:28:05.033050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:00.358 14:28:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:00.358 14:28:05 -- common/autotest_common.sh@862 -- # return 0 00:22:00.358 14:28:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:00.358 14:28:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:00.358 14:28:05 -- common/autotest_common.sh@10 -- # set +x 00:22:00.358 14:28:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:00.358 14:28:05 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:00.616 [2024-12-05 14:28:06.179245] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:00.616 14:28:06 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:00.874 Malloc0 00:22:00.874 14:28:06 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:01.132 14:28:06 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:01.391 14:28:06 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:01.650 [2024-12-05 14:28:07.205653] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:01.650 14:28:07 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:01.908 [2024-12-05 14:28:07.502013] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:01.908 14:28:07 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:02.166 [2024-12-05 14:28:07.786411] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:02.166 14:28:07 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:02.166 14:28:07 -- host/failover.sh@31 -- # bdevperf_pid=95651 00:22:02.166 14:28:07 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:02.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
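Annotation: before bdevperf signals readiness, failover.sh has already assembled the target side over a handful of rpc.py calls. Condensed, the sequence the trace shows is roughly the sketch below ($rpc is only shorthand for the rpc.py path used throughout this log, and the comments are interpretation, not test output). The three listeners on ports 4420/4421/4422 are what give the initiator alternate paths to fail over between, and bdevperf is started with -z so it idles until the perform_tests RPC issued a few lines further on.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target side: TCP transport, a 64 MB malloc bdev with 512-byte blocks, one subsystem exposing it on three ports.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

# Initiator side: bdevperf with its own RPC socket, a 128-deep 4 KiB verify workload for 15 seconds.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f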
00:22:02.166 14:28:07 -- host/failover.sh@34 -- # waitforlisten 95651 /var/tmp/bdevperf.sock 00:22:02.166 14:28:07 -- common/autotest_common.sh@829 -- # '[' -z 95651 ']' 00:22:02.166 14:28:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:02.166 14:28:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:02.166 14:28:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:02.166 14:28:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:02.166 14:28:07 -- common/autotest_common.sh@10 -- # set +x 00:22:03.542 14:28:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:03.542 14:28:08 -- common/autotest_common.sh@862 -- # return 0 00:22:03.542 14:28:08 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:03.542 NVMe0n1 00:22:03.542 14:28:09 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:03.799 00:22:04.056 14:28:09 -- host/failover.sh@39 -- # run_test_pid=95693 00:22:04.056 14:28:09 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:04.056 14:28:09 -- host/failover.sh@41 -- # sleep 1 00:22:04.992 14:28:10 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:05.250 [2024-12-05 14:28:10.732065] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e0c90 is same with the state(5) to be set 00:22:05.250 [2024-12-05 14:28:10.732120] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e0c90 is same with the state(5) to be set 00:22:05.250 [2024-12-05 14:28:10.732131] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e0c90 is same with the state(5) to be set 00:22:05.250 [2024-12-05 14:28:10.732143] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e0c90 is same with the state(5) to be set 00:22:05.250 [2024-12-05 14:28:10.732150] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e0c90 is same with the state(5) to be set 00:22:05.250 [2024-12-05 14:28:10.732159] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e0c90 is same with the state(5) to be set 00:22:05.250 [2024-12-05 14:28:10.732167] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e0c90 is same with the state(5) to be set 00:22:05.251 [2024-12-05 14:28:10.732174] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e0c90 is same with the state(5) to be set 00:22:05.251 [2024-12-05 14:28:10.732181] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e0c90 is same with the state(5) to be set 00:22:05.251 [2024-12-05 14:28:10.732188] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e0c90 is same with the state(5) to be set 00:22:05.251 [2024-12-05 14:28:10.732196] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e0c90 is same with the state(5) to be set 00:22:05.251 
[2024-12-05 14:28:10.732203] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x19e0c90 is same with the state(5) to be set 00:22:05.251 [2024-12-05 14:28:10.732479] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e0c90 is same with the state(5) to be set 00:22:05.251 [2024-12-05 14:28:10.732486] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e0c90 is same with the state(5) to be set 00:22:05.251 [2024-12-05 14:28:10.732493] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e0c90 is same with the state(5) to be set 00:22:05.251 [2024-12-05 14:28:10.732500] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e0c90 is same with the state(5) to be set 00:22:05.251 [2024-12-05 14:28:10.732507] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e0c90 is same with the state(5) to be set 00:22:05.251 [2024-12-05 14:28:10.732514] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e0c90 is same with the state(5) to be set 00:22:05.251 [2024-12-05 14:28:10.732521] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e0c90 is same with the state(5) to be set 00:22:05.251 [2024-12-05 14:28:10.732528] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e0c90 is same with the state(5) to be set 00:22:05.251 [2024-12-05 14:28:10.732535] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e0c90 is same with the state(5) to be set 00:22:05.251 [2024-12-05 14:28:10.732542] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e0c90 is same with the state(5) to be set 00:22:05.251 [2024-12-05 14:28:10.732549] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e0c90 is same with the state(5) to be set 00:22:05.251 [2024-12-05 14:28:10.732556] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e0c90 is same with the state(5) to be set 00:22:05.251 14:28:10 -- host/failover.sh@45 -- # sleep 3 00:22:08.539 14:28:13 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:08.539 00:22:08.539 14:28:14 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:08.798 [2024-12-05 14:28:14.294577] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.798 [2024-12-05 14:28:14.294636] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.798 [2024-12-05 14:28:14.294646] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.798 [2024-12-05 14:28:14.294653] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.798 [2024-12-05 14:28:14.294660] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.798 [2024-12-05 14:28:14.294667] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.798 [2024-12-05 14:28:14.294674] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.798 [2024-12-05 14:28:14.294681] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.798 [2024-12-05 14:28:14.294688] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.798 [2024-12-05 14:28:14.294695] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.798 [2024-12-05 14:28:14.294702] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.798 [2024-12-05 14:28:14.294710] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.798 [2024-12-05 14:28:14.294718] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.798 [2024-12-05 14:28:14.294724] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.798 [2024-12-05 14:28:14.294732] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.798 [2024-12-05 14:28:14.294739] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.798 [2024-12-05 14:28:14.294746] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.798 [2024-12-05 14:28:14.294752] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.798 [2024-12-05 14:28:14.294759] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.798 [2024-12-05 14:28:14.294766] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.798 [2024-12-05 14:28:14.294775] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.798 [2024-12-05 14:28:14.294782] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.798 [2024-12-05 14:28:14.294789] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.798 [2024-12-05 14:28:14.294796] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.798 [2024-12-05 14:28:14.294833] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.798 [2024-12-05 14:28:14.294860] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.798 [2024-12-05 14:28:14.294868] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.294876] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the 
state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.294892] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.294899] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.294907] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.294914] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.294922] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.294929] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.294936] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.294944] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.294951] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.294960] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.294967] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.294975] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.294983] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.294990] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.294998] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.295007] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.295016] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.295023] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.295031] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.295039] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.295046] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.295054] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.295061] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.295068] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.295077] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.295084] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.295091] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.295099] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.295107] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.295114] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.295122] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.295129] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.295137] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.295144] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.295151] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.295162] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.295170] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.295176] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.295184] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.295192] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.295214] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.295222] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.295244] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 
14:28:14.295267] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.295274] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 [2024-12-05 14:28:14.295282] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2380 is same with the state(5) to be set 00:22:08.799 14:28:14 -- host/failover.sh@50 -- # sleep 3 00:22:12.080 14:28:17 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:12.080 [2024-12-05 14:28:17.571332] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:12.080 14:28:17 -- host/failover.sh@55 -- # sleep 1 00:22:13.018 14:28:18 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:13.278 [2024-12-05 14:28:18.835561] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.278 [2024-12-05 14:28:18.835610] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.278 [2024-12-05 14:28:18.835621] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.278 [2024-12-05 14:28:18.835628] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.278 [2024-12-05 14:28:18.835637] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.278 [2024-12-05 14:28:18.835644] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.278 [2024-12-05 14:28:18.835651] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.278 [2024-12-05 14:28:18.835658] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.278 [2024-12-05 14:28:18.835666] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.278 [2024-12-05 14:28:18.835673] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.278 [2024-12-05 14:28:18.835681] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.278 [2024-12-05 14:28:18.835689] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.278 [2024-12-05 14:28:18.835696] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.278 [2024-12-05 14:28:18.835704] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.278 [2024-12-05 14:28:18.835711] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.278 
[2024-12-05 14:28:18.835718] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.278 [2024-12-05 14:28:18.835726] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.278 [2024-12-05 14:28:18.835734] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.278 [2024-12-05 14:28:18.835741] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.278 [2024-12-05 14:28:18.835748] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.278 [2024-12-05 14:28:18.835756] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.278 [2024-12-05 14:28:18.835769] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.278 [2024-12-05 14:28:18.835777] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.278 [2024-12-05 14:28:18.835785] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.278 [2024-12-05 14:28:18.835793] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.278 [2024-12-05 14:28:18.835801] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.279 [2024-12-05 14:28:18.835838] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.279 [2024-12-05 14:28:18.835846] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.279 [2024-12-05 14:28:18.835853] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.279 [2024-12-05 14:28:18.835861] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.279 [2024-12-05 14:28:18.835871] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.279 [2024-12-05 14:28:18.835879] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.279 [2024-12-05 14:28:18.835886] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.279 [2024-12-05 14:28:18.835893] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.279 [2024-12-05 14:28:18.835900] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.279 [2024-12-05 14:28:18.835907] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.279 [2024-12-05 14:28:18.835914] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.279 [2024-12-05 14:28:18.835921] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.279 [2024-12-05 14:28:18.835929] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.279 [2024-12-05 14:28:18.835936] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.279 [2024-12-05 14:28:18.835943] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.279 [2024-12-05 14:28:18.835951] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.279 [2024-12-05 14:28:18.835960] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.279 [2024-12-05 14:28:18.835968] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.279 [2024-12-05 14:28:18.835977] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.279 [2024-12-05 14:28:18.836010] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.279 [2024-12-05 14:28:18.836017] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.279 [2024-12-05 14:28:18.836025] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.279 [2024-12-05 14:28:18.836033] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.279 [2024-12-05 14:28:18.836047] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.279 [2024-12-05 14:28:18.836055] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.279 [2024-12-05 14:28:18.836063] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.279 [2024-12-05 14:28:18.836072] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.279 [2024-12-05 14:28:18.836080] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.279 [2024-12-05 14:28:18.836087] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.279 [2024-12-05 14:28:18.836096] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.279 [2024-12-05 14:28:18.836104] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.279 [2024-12-05 14:28:18.836111] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2a60 is same with the state(5) to be set 00:22:13.279 [2024-12-05 14:28:18.836118] 
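For readers following the trace: host/failover.sh is moving the subsystem's NVMe/TCP listener between ports while bdevperf keeps I/O running, and the repeated tcp.c:1576 messages appear to be the target logging qpair receive-state updates as connections drop and reconnect. The sketch below only restates the RPC sequence already recorded in the trace lines above; the rpc shell variable and the comments are ours, the commands, NQN, address and ports are copied from the trace, and the sleeps between steps are omitted.

    # Sketch of the RPC calls recorded in the trace above (not the test's source).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Attach bdevperf's NVMe-oF controller to the listener on port 4422.
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

    # Shuffle the subsystem's listeners so the initiator has to fail over between paths.
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422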
00:22:13.279 14:28:18 -- host/failover.sh@59 -- # wait 95693
00:22:19.851 0
00:22:19.851 14:28:24 -- host/failover.sh@61 -- # killprocess 95651
00:22:19.851 14:28:24 -- common/autotest_common.sh@936 -- # '[' -z 95651 ']'
00:22:19.851 14:28:24 -- common/autotest_common.sh@940 -- # kill -0 95651
00:22:19.851 14:28:24 -- common/autotest_common.sh@941 -- # uname
00:22:19.851 14:28:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:22:19.851 14:28:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95651
00:22:19.851 killing process with pid 95651
14:28:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:22:19.851 14:28:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:22:19.851 14:28:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95651'
00:22:19.851 14:28:24 -- common/autotest_common.sh@955 -- # kill 95651
00:22:19.851 14:28:24 -- common/autotest_common.sh@960 -- # wait 95651
00:22:19.851 14:28:24 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:22:19.851 [2024-12-05 14:28:07.861676]
Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:19.851 [2024-12-05 14:28:07.861794] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95651 ] 00:22:19.851 [2024-12-05 14:28:08.006901] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.851 [2024-12-05 14:28:08.099610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.851 Running I/O for 15 seconds... 00:22:19.851 [2024-12-05 14:28:10.732804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.851 [2024-12-05 14:28:10.732898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.851 [2024-12-05 14:28:10.732925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.851 [2024-12-05 14:28:10.732952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.851 [2024-12-05 14:28:10.732972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:16400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.851 [2024-12-05 14:28:10.732985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.851 [2024-12-05 14:28:10.732999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:16408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.851 [2024-12-05 14:28:10.733012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.851 [2024-12-05 14:28:10.733027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.851 [2024-12-05 14:28:10.733039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.851 [2024-12-05 14:28:10.733053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.851 [2024-12-05 14:28:10.733066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.851 [2024-12-05 14:28:10.733080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.851 [2024-12-05 14:28:10.733093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.851 [2024-12-05 14:28:10.733107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.851 [2024-12-05 14:28:10.733120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.851 [2024-12-05 14:28:10.733133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 
lba:16976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.851 [2024-12-05 14:28:10.733161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.851 [2024-12-05 14:28:10.733175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.851 [2024-12-05 14:28:10.733239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.851 [2024-12-05 14:28:10.733252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.851 [2024-12-05 14:28:10.733263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.851 [2024-12-05 14:28:10.733308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:17024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.851 [2024-12-05 14:28:10.733322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.851 [2024-12-05 14:28:10.733334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.851 [2024-12-05 14:28:10.733346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.851 [2024-12-05 14:28:10.733358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.851 [2024-12-05 14:28:10.733369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.852 [2024-12-05 14:28:10.733381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.852 [2024-12-05 14:28:10.733392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.852 [2024-12-05 14:28:10.733404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.852 [2024-12-05 14:28:10.733415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.852 [2024-12-05 14:28:10.733434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.852 [2024-12-05 14:28:10.733447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.852 [2024-12-05 14:28:10.733459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.852 [2024-12-05 14:28:10.733470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.852 [2024-12-05 14:28:10.733483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:16488 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:19.852 [2024-12-05 14:28:10.733494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.852 [2024-12-05 14:28:10.733506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.852 [2024-12-05 14:28:10.733517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.852 [2024-12-05 14:28:10.733529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.852 [2024-12-05 14:28:10.733540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.852 [2024-12-05 14:28:10.733552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.852 [2024-12-05 14:28:10.733563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.852 [2024-12-05 14:28:10.733576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:16544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.852 [2024-12-05 14:28:10.733587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.852 [2024-12-05 14:28:10.733600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.852 [2024-12-05 14:28:10.733618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.852 [2024-12-05 14:28:10.733632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.852 [2024-12-05 14:28:10.733643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.852 [2024-12-05 14:28:10.733656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.852 [2024-12-05 14:28:10.733668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.852 [2024-12-05 14:28:10.733680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.852 [2024-12-05 14:28:10.733691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.852 [2024-12-05 14:28:10.733704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:17160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.852 [2024-12-05 14:28:10.733715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.852 [2024-12-05 14:28:10.733728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.852 [2024-12-05 
14:28:10.733739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.852 [2024-12-05 14:28:10.733753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.852 [2024-12-05 14:28:10.733764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.852 [2024-12-05 14:28:10.733779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.852 [2024-12-05 14:28:10.733792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.852 [2024-12-05 14:28:10.733806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:17216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.852 [2024-12-05 14:28:10.733870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.852 [2024-12-05 14:28:10.733891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.852 [2024-12-05 14:28:10.733905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.852 [2024-12-05 14:28:10.733932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.852 [2024-12-05 14:28:10.733947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.852 [2024-12-05 14:28:10.733962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.852 [2024-12-05 14:28:10.733975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.852 [2024-12-05 14:28:10.733989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:17248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.852 [2024-12-05 14:28:10.734002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.852 [2024-12-05 14:28:10.734024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.852 [2024-12-05 14:28:10.734038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.852 [2024-12-05 14:28:10.734053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.852 [2024-12-05 14:28:10.734066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.852 [2024-12-05 14:28:10.734080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.852 [2024-12-05 14:28:10.734093] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.852 [2024-12-05 14:28:10.734107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.852 [2024-12-05 14:28:10.734119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.852 [2024-12-05 14:28:10.734133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.852 [2024-12-05 14:28:10.734145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.852 [2024-12-05 14:28:10.734175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.852 [2024-12-05 14:28:10.734187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.852 [2024-12-05 14:28:10.734200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:17304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.852 [2024-12-05 14:28:10.734228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.852 [2024-12-05 14:28:10.734241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.852 [2024-12-05 14:28:10.734268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.852 [2024-12-05 14:28:10.734280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:17320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.852 [2024-12-05 14:28:10.734291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.852 [2024-12-05 14:28:10.734304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:17328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.852 [2024-12-05 14:28:10.734315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.852 [2024-12-05 14:28:10.734327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.852 [2024-12-05 14:28:10.734338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.852 [2024-12-05 14:28:10.734350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.852 [2024-12-05 14:28:10.734361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.852 [2024-12-05 14:28:10.734397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.852 [2024-12-05 14:28:10.734410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.852 [2024-12-05 14:28:10.734432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.852 [2024-12-05 14:28:10.734445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.852 [2024-12-05 14:28:10.734457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.852 [2024-12-05 14:28:10.734468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.852 [2024-12-05 14:28:10.734481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:16616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.852 [2024-12-05 14:28:10.734492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.852 [2024-12-05 14:28:10.734504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.852 [2024-12-05 14:28:10.734515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.852 [2024-12-05 14:28:10.734527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:16640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.852 [2024-12-05 14:28:10.734539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.853 [2024-12-05 14:28:10.734552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.853 [2024-12-05 14:28:10.734564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.853 [2024-12-05 14:28:10.734576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:16688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.853 [2024-12-05 14:28:10.734587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.853 [2024-12-05 14:28:10.734600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.853 [2024-12-05 14:28:10.734610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.853 [2024-12-05 14:28:10.734623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.853 [2024-12-05 14:28:10.734634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.853 [2024-12-05 14:28:10.734646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:16720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.853 [2024-12-05 14:28:10.734657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:19.853 [2024-12-05 14:28:10.734670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.853 [2024-12-05 14:28:10.734680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.853 [2024-12-05 14:28:10.734692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.853 [2024-12-05 14:28:10.734704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.853 [2024-12-05 14:28:10.734716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.853 [2024-12-05 14:28:10.734733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.853 [2024-12-05 14:28:10.734746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.853 [2024-12-05 14:28:10.734757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.853 [2024-12-05 14:28:10.734769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.853 [2024-12-05 14:28:10.734782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.853 [2024-12-05 14:28:10.734800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.853 [2024-12-05 14:28:10.734870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.853 [2024-12-05 14:28:10.734884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.853 [2024-12-05 14:28:10.734896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.853 [2024-12-05 14:28:10.734910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.853 [2024-12-05 14:28:10.734922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.853 [2024-12-05 14:28:10.734945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.853 [2024-12-05 14:28:10.734958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.853 [2024-12-05 14:28:10.734972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:17384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.853 [2024-12-05 14:28:10.734985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.853 
[2024-12-05 14:28:10.734998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.853 [2024-12-05 14:28:10.735011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.853 [2024-12-05 14:28:10.735025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.853 [2024-12-05 14:28:10.735037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.853 [2024-12-05 14:28:10.735050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.853 [2024-12-05 14:28:10.735063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.853 [2024-12-05 14:28:10.735077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.853 [2024-12-05 14:28:10.735090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.853 [2024-12-05 14:28:10.735103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.853 [2024-12-05 14:28:10.735116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.853 [2024-12-05 14:28:10.735136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.853 [2024-12-05 14:28:10.735150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.853 [2024-12-05 14:28:10.735179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:16880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.853 [2024-12-05 14:28:10.735192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.853 [2024-12-05 14:28:10.735205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.853 [2024-12-05 14:28:10.735259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.853 [2024-12-05 14:28:10.735272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:16896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.853 [2024-12-05 14:28:10.735283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.853 [2024-12-05 14:28:10.735295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.853 [2024-12-05 14:28:10.735306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.853 [2024-12-05 14:28:10.735318] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.853 [2024-12-05 14:28:10.735329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.853 [2024-12-05 14:28:10.735348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.853 [2024-12-05 14:28:10.735359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.853 [2024-12-05 14:28:10.735372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.853 [2024-12-05 14:28:10.735383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.853 [2024-12-05 14:28:10.735396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:17432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.853 [2024-12-05 14:28:10.735407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.853 [2024-12-05 14:28:10.735419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.853 [2024-12-05 14:28:10.735431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.853 [2024-12-05 14:28:10.735443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:17448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.853 [2024-12-05 14:28:10.735454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.853 [2024-12-05 14:28:10.735473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.853 [2024-12-05 14:28:10.735484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.853 [2024-12-05 14:28:10.735497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.853 [2024-12-05 14:28:10.735514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.853 [2024-12-05 14:28:10.735528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:17472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.853 [2024-12-05 14:28:10.735539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.853 [2024-12-05 14:28:10.735551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.853 [2024-12-05 14:28:10.735563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.853 [2024-12-05 14:28:10.735575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:19 nsid:1 lba:17488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.853 [2024-12-05 14:28:10.735587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.853 [2024-12-05 14:28:10.735599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.853 [2024-12-05 14:28:10.735610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.853 [2024-12-05 14:28:10.735622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.853 [2024-12-05 14:28:10.735633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.853 [2024-12-05 14:28:10.735646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.853 [2024-12-05 14:28:10.735656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.853 [2024-12-05 14:28:10.735669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.854 [2024-12-05 14:28:10.735680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.854 [2024-12-05 14:28:10.735692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.854 [2024-12-05 14:28:10.735703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.854 [2024-12-05 14:28:10.735716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:17536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.854 [2024-12-05 14:28:10.735727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.854 [2024-12-05 14:28:10.735745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.854 [2024-12-05 14:28:10.735757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.854 [2024-12-05 14:28:10.735769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.854 [2024-12-05 14:28:10.735780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.854 [2024-12-05 14:28:10.735792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.854 [2024-12-05 14:28:10.735803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.854 [2024-12-05 14:28:10.735831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:17568 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:19.854 [2024-12-05 14:28:10.735864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.854 [2024-12-05 14:28:10.735878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:17576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.854 [2024-12-05 14:28:10.735891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.854 [2024-12-05 14:28:10.735912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:17584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.854 [2024-12-05 14:28:10.735925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.854 [2024-12-05 14:28:10.735950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:17592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.854 [2024-12-05 14:28:10.735964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.854 [2024-12-05 14:28:10.735978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:17600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.854 [2024-12-05 14:28:10.736034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.854 [2024-12-05 14:28:10.736050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:17608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.854 [2024-12-05 14:28:10.736063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.854 [2024-12-05 14:28:10.736078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.854 [2024-12-05 14:28:10.736091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.854 [2024-12-05 14:28:10.736107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.854 [2024-12-05 14:28:10.736120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.854 [2024-12-05 14:28:10.736134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.854 [2024-12-05 14:28:10.736147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.854 [2024-12-05 14:28:10.736162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.854 [2024-12-05 14:28:10.736175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.854 [2024-12-05 14:28:10.736189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.854 
[2024-12-05 14:28:10.736203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.854 [2024-12-05 14:28:10.736217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.854 [2024-12-05 14:28:10.736230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.854 [2024-12-05 14:28:10.736244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:17040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.854 [2024-12-05 14:28:10.736257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.854 [2024-12-05 14:28:10.736315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:17056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.854 [2024-12-05 14:28:10.736343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.854 [2024-12-05 14:28:10.736390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.854 [2024-12-05 14:28:10.736402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.854 [2024-12-05 14:28:10.736414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.854 [2024-12-05 14:28:10.736425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.854 [2024-12-05 14:28:10.736438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.854 [2024-12-05 14:28:10.736449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.854 [2024-12-05 14:28:10.736461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:17640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.854 [2024-12-05 14:28:10.736472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.854 [2024-12-05 14:28:10.736490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:17648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.854 [2024-12-05 14:28:10.736501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.854 [2024-12-05 14:28:10.736514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.854 [2024-12-05 14:28:10.736526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.854 [2024-12-05 14:28:10.736538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:17664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.854 [2024-12-05 14:28:10.736549] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.854 [2024-12-05 14:28:10.736562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.854 [2024-12-05 14:28:10.736573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.854 [2024-12-05 14:28:10.736586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.854 [2024-12-05 14:28:10.736598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.854 [2024-12-05 14:28:10.736610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:17128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.854 [2024-12-05 14:28:10.736622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.854 [2024-12-05 14:28:10.736634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.854 [2024-12-05 14:28:10.736645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.854 [2024-12-05 14:28:10.736657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.854 [2024-12-05 14:28:10.736674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.854 [2024-12-05 14:28:10.736688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:17176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.854 [2024-12-05 14:28:10.736699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.854 [2024-12-05 14:28:10.736712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.854 [2024-12-05 14:28:10.736723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.854 [2024-12-05 14:28:10.736736] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188e130 is same with the state(5) to be set 00:22:19.854 [2024-12-05 14:28:10.736750] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.854 [2024-12-05 14:28:10.736765] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.854 [2024-12-05 14:28:10.736775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17200 len:8 PRP1 0x0 PRP2 0x0 00:22:19.854 [2024-12-05 14:28:10.736785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.854 [2024-12-05 14:28:10.736869] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x188e130 was disconnected and freed. reset controller. 
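
The burst of paired print_command/print_completion notices above is the I/O qpair on 10.0.0.2:4420 being torn down: every command still queued on qpair 0x188e130 is completed manually with status (00/08), i.e. Status Code Type 0 (Generic Command Status) / Status Code 0x08 (Command Aborted due to SQ Deletion), after which the qpair is disconnected and freed. A minimal post-processing sketch, assuming this console output has been saved to a file named console.log (an assumed name, not an artifact produced by this job), to tally those statuses and count the qpair teardowns:

    # tally the (SCT/SC) status pairs attached to the aborted commands
    grep -o 'ABORTED - SQ DELETION ([0-9a-f]*/[0-9a-f]*)' console.log | sort | uniq -c
    # count how many I/O qpairs were disconnected and freed during the run
    grep -c 'bdev_nvme_disconnected_qpair_cb' console.log
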
00:22:19.854 [2024-12-05 14:28:10.736887] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:19.854 [2024-12-05 14:28:10.736953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.854 [2024-12-05 14:28:10.736975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.854 [2024-12-05 14:28:10.736989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.854 [2024-12-05 14:28:10.737001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.854 [2024-12-05 14:28:10.737014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.854 [2024-12-05 14:28:10.737033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.854 [2024-12-05 14:28:10.737046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.855 [2024-12-05 14:28:10.737057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.855 [2024-12-05 14:28:10.737068] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:19.855 [2024-12-05 14:28:10.737108] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1809cb0 (9): Bad file descriptor 00:22:19.855 [2024-12-05 14:28:10.739350] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:19.855 [2024-12-05 14:28:10.763742] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
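
At this point bdev_nvme has failed over from 10.0.0.2:4420 to 10.0.0.2:4421: the admin queue's outstanding ASYNC EVENT REQUESTs are aborted, the controller nqn.2016-06.io.spdk:cnode1 briefly enters the failed state, and the reconnect to the new path completes ("Resetting controller successful"). One way such a failover is typically provoked on the target side is to drop the listener the initiator is currently connected to; a minimal sketch using scripts/rpc.py, assuming a running SPDK nvmf TCP target that exports this subsystem (exact option spelling can differ between SPDK releases):

    # drop the listener the host is using; queued I/O on that path is then aborted
    # with SQ DELETION and bdev_nvme fails over to the next registered trid
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # re-add the listener afterwards so the path is available again for a later pass
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Queued I/O on the dropped path then surfaces exactly as logged above: aborted with SQ DELETION, followed by bdev_nvme_failover_trid selecting the next address.
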
00:22:19.855 [2024-12-05 14:28:14.295384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:52800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-12-05 14:28:14.295431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.855 [2024-12-05 14:28:14.295455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:52808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-12-05 14:28:14.295485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.855 [2024-12-05 14:28:14.295500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:52848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-12-05 14:28:14.295513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.855 [2024-12-05 14:28:14.295526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:52856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-12-05 14:28:14.295538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.855 [2024-12-05 14:28:14.295551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:52872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-12-05 14:28:14.295563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.855 [2024-12-05 14:28:14.295576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:52880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-12-05 14:28:14.295588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.855 [2024-12-05 14:28:14.295601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:52888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-12-05 14:28:14.295612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.855 [2024-12-05 14:28:14.295625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:52952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-12-05 14:28:14.295636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.855 [2024-12-05 14:28:14.295649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:52960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-12-05 14:28:14.295660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.855 [2024-12-05 14:28:14.295673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:52976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-12-05 14:28:14.295685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.855 [2024-12-05 14:28:14.295697] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:53000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-12-05 14:28:14.295709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.855 [2024-12-05 14:28:14.295722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:53048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-12-05 14:28:14.295733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.855 [2024-12-05 14:28:14.295746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:53064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-12-05 14:28:14.295757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.855 [2024-12-05 14:28:14.295769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:53072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-12-05 14:28:14.295781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.855 [2024-12-05 14:28:14.295802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:53080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-12-05 14:28:14.295854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.855 [2024-12-05 14:28:14.295867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:53112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-12-05 14:28:14.295895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.855 [2024-12-05 14:28:14.295912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:53576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-12-05 14:28:14.295927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.855 [2024-12-05 14:28:14.295942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:53584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-12-05 14:28:14.295954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.855 [2024-12-05 14:28:14.295968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:53592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-12-05 14:28:14.295990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.855 [2024-12-05 14:28:14.296006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:53600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-12-05 14:28:14.296020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.855 [2024-12-05 14:28:14.296034] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:53608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-12-05 14:28:14.296047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.855 [2024-12-05 14:28:14.296061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:53616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-12-05 14:28:14.296073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.855 [2024-12-05 14:28:14.296087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:53624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-12-05 14:28:14.296099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.855 [2024-12-05 14:28:14.296113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:53632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-12-05 14:28:14.296125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.855 [2024-12-05 14:28:14.296139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:53648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-12-05 14:28:14.296151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.855 [2024-12-05 14:28:14.296166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:53664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-12-05 14:28:14.296179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.855 [2024-12-05 14:28:14.296193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:53680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-12-05 14:28:14.296213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.855 [2024-12-05 14:28:14.296228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:53704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-12-05 14:28:14.296255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.855 [2024-12-05 14:28:14.296295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:53712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-12-05 14:28:14.296306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.855 [2024-12-05 14:28:14.296320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:53720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-12-05 14:28:14.296333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.855 [2024-12-05 14:28:14.296362] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:105 nsid:1 lba:53736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-12-05 14:28:14.296390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.855 [2024-12-05 14:28:14.296402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:53744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-12-05 14:28:14.296413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.855 [2024-12-05 14:28:14.296427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:53752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.856 [2024-12-05 14:28:14.296445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.856 [2024-12-05 14:28:14.296458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:53768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.856 [2024-12-05 14:28:14.296469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.856 [2024-12-05 14:28:14.296482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:53784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.856 [2024-12-05 14:28:14.296494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.856 [2024-12-05 14:28:14.296506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:53800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.856 [2024-12-05 14:28:14.296517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.856 [2024-12-05 14:28:14.296529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:53816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.856 [2024-12-05 14:28:14.296541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.856 [2024-12-05 14:28:14.296553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:53824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.856 [2024-12-05 14:28:14.296564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.856 [2024-12-05 14:28:14.296577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:53832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.856 [2024-12-05 14:28:14.296587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.856 [2024-12-05 14:28:14.296600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:53840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.856 [2024-12-05 14:28:14.296617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.856 [2024-12-05 14:28:14.296630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 
lba:53848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.856 [2024-12-05 14:28:14.296642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.856 [2024-12-05 14:28:14.296654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:53880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.856 [2024-12-05 14:28:14.296665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.856 [2024-12-05 14:28:14.296677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:53128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.856 [2024-12-05 14:28:14.296689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.856 [2024-12-05 14:28:14.296701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:53160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.856 [2024-12-05 14:28:14.296712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.856 [2024-12-05 14:28:14.296725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:53192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.856 [2024-12-05 14:28:14.296736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.856 [2024-12-05 14:28:14.296748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:53200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.856 [2024-12-05 14:28:14.296759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.856 [2024-12-05 14:28:14.296771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:53208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.856 [2024-12-05 14:28:14.296782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.856 [2024-12-05 14:28:14.296794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:53224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.856 [2024-12-05 14:28:14.296806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.856 [2024-12-05 14:28:14.296857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:53240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.856 [2024-12-05 14:28:14.296875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.856 [2024-12-05 14:28:14.296903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:53248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.856 [2024-12-05 14:28:14.296917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.856 [2024-12-05 14:28:14.296948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:53264 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:19.856 [2024-12-05 14:28:14.296963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.856 [2024-12-05 14:28:14.296978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:53272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.856 [2024-12-05 14:28:14.296990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.856 [2024-12-05 14:28:14.297014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:53280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.856 [2024-12-05 14:28:14.297028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.856 [2024-12-05 14:28:14.297042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:53352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.856 [2024-12-05 14:28:14.297054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.856 [2024-12-05 14:28:14.297068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:53400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.856 [2024-12-05 14:28:14.297080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.856 [2024-12-05 14:28:14.297094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:53424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.856 [2024-12-05 14:28:14.297106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.856 [2024-12-05 14:28:14.297120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:53432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.856 [2024-12-05 14:28:14.297132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.856 [2024-12-05 14:28:14.297146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:53440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.856 [2024-12-05 14:28:14.297158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.856 [2024-12-05 14:28:14.297172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:53904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.856 [2024-12-05 14:28:14.297184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.856 [2024-12-05 14:28:14.297198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:53920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.856 [2024-12-05 14:28:14.297226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.856 [2024-12-05 14:28:14.297239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:53928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.856 [2024-12-05 
14:28:14.297251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.856 [2024-12-05 14:28:14.297264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:53936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.856 [2024-12-05 14:28:14.297275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.856 [2024-12-05 14:28:14.297289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:53944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.856 [2024-12-05 14:28:14.297300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.856 [2024-12-05 14:28:14.297313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:53952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.856 [2024-12-05 14:28:14.297325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.856 [2024-12-05 14:28:14.297360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:53960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.856 [2024-12-05 14:28:14.297384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.856 [2024-12-05 14:28:14.297397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:53968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.856 [2024-12-05 14:28:14.297409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.856 [2024-12-05 14:28:14.297421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:53976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.856 [2024-12-05 14:28:14.297432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.856 [2024-12-05 14:28:14.297445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:53984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.856 [2024-12-05 14:28:14.297456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.856 [2024-12-05 14:28:14.297468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:53992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.856 [2024-12-05 14:28:14.297480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.856 [2024-12-05 14:28:14.297492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:54000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.856 [2024-12-05 14:28:14.297503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.856 [2024-12-05 14:28:14.297515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:54008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.856 [2024-12-05 14:28:14.297527] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.856 [2024-12-05 14:28:14.297539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:54016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.856 [2024-12-05 14:28:14.297550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.857 [2024-12-05 14:28:14.297562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:54024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.857 [2024-12-05 14:28:14.297574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.857 [2024-12-05 14:28:14.297587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:54032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.857 [2024-12-05 14:28:14.297598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.857 [2024-12-05 14:28:14.297611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.857 [2024-12-05 14:28:14.297622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.857 [2024-12-05 14:28:14.297634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:54048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.857 [2024-12-05 14:28:14.297645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.857 [2024-12-05 14:28:14.297658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:54056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.857 [2024-12-05 14:28:14.297669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.857 [2024-12-05 14:28:14.297687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:54064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.857 [2024-12-05 14:28:14.297699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.857 [2024-12-05 14:28:14.297711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:54072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.857 [2024-12-05 14:28:14.297723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.857 [2024-12-05 14:28:14.297735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:54080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.857 [2024-12-05 14:28:14.297746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.857 [2024-12-05 14:28:14.297764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:54088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.857 [2024-12-05 14:28:14.297781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.857 [2024-12-05 14:28:14.297793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:54096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.857 [2024-12-05 14:28:14.297805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.857 [2024-12-05 14:28:14.297859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:54104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.857 [2024-12-05 14:28:14.297883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.857 [2024-12-05 14:28:14.297899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:54112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.857 [2024-12-05 14:28:14.297911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.857 [2024-12-05 14:28:14.297925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:54120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.857 [2024-12-05 14:28:14.297938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.857 [2024-12-05 14:28:14.297951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:54128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.857 [2024-12-05 14:28:14.297964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.857 [2024-12-05 14:28:14.297978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:54136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.857 [2024-12-05 14:28:14.297990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.857 [2024-12-05 14:28:14.298004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:54144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.857 [2024-12-05 14:28:14.298016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.857 [2024-12-05 14:28:14.298029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:54152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.857 [2024-12-05 14:28:14.298041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.857 [2024-12-05 14:28:14.298055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:54160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.857 [2024-12-05 14:28:14.298067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.857 [2024-12-05 14:28:14.298096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:54168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.857 [2024-12-05 14:28:14.298119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:19.857 [2024-12-05 14:28:14.298133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:54176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.857 [2024-12-05 14:28:14.298155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.857 [2024-12-05 14:28:14.298168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:54184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.857 [2024-12-05 14:28:14.298181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.857 [2024-12-05 14:28:14.298194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.857 [2024-12-05 14:28:14.298220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.857 [2024-12-05 14:28:14.298249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:54200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.857 [2024-12-05 14:28:14.298276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.857 [2024-12-05 14:28:14.298288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:54208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.857 [2024-12-05 14:28:14.298299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.857 [2024-12-05 14:28:14.298317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:53448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.857 [2024-12-05 14:28:14.298335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.857 [2024-12-05 14:28:14.298348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:53472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.857 [2024-12-05 14:28:14.298359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.857 [2024-12-05 14:28:14.298372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:53480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.857 [2024-12-05 14:28:14.298383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.857 [2024-12-05 14:28:14.298395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:53496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.857 [2024-12-05 14:28:14.298406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.857 [2024-12-05 14:28:14.298418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:53520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.857 [2024-12-05 14:28:14.298429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.857 
[2024-12-05 14:28:14.298441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:53528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.857 [2024-12-05 14:28:14.298452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.857 [2024-12-05 14:28:14.298465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:53560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.857 [2024-12-05 14:28:14.298482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.857 [2024-12-05 14:28:14.298495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:53568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.857 [2024-12-05 14:28:14.298507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.857 [2024-12-05 14:28:14.298519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:53640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.857 [2024-12-05 14:28:14.298530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.857 [2024-12-05 14:28:14.298543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:53656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.857 [2024-12-05 14:28:14.298554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.857 [2024-12-05 14:28:14.298566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:53672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.857 [2024-12-05 14:28:14.298577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.857 [2024-12-05 14:28:14.298606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:53688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.857 [2024-12-05 14:28:14.298618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.857 [2024-12-05 14:28:14.298631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:53696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.857 [2024-12-05 14:28:14.298642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.857 [2024-12-05 14:28:14.298655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:53728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.857 [2024-12-05 14:28:14.298666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.857 [2024-12-05 14:28:14.298679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:53760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.857 [2024-12-05 14:28:14.298691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.857 [2024-12-05 14:28:14.298704] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:53776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.857 [2024-12-05 14:28:14.298716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.857 [2024-12-05 14:28:14.298735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:54216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.858 [2024-12-05 14:28:14.298755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.858 [2024-12-05 14:28:14.298769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:54224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.858 [2024-12-05 14:28:14.298780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.858 [2024-12-05 14:28:14.298793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:54232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.858 [2024-12-05 14:28:14.298805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.858 [2024-12-05 14:28:14.298870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:54240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.858 [2024-12-05 14:28:14.298906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.858 [2024-12-05 14:28:14.298923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:54248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.858 [2024-12-05 14:28:14.298936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.858 [2024-12-05 14:28:14.298950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:54256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.858 [2024-12-05 14:28:14.298963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.858 [2024-12-05 14:28:14.298977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:54264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.858 [2024-12-05 14:28:14.298990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.858 [2024-12-05 14:28:14.299004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:54272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.858 [2024-12-05 14:28:14.299016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.858 [2024-12-05 14:28:14.299030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:53792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.858 [2024-12-05 14:28:14.299042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.858 [2024-12-05 14:28:14.299056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:89 nsid:1 lba:53808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.858 [2024-12-05 14:28:14.299068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.858 [2024-12-05 14:28:14.299082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:53856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.858 [2024-12-05 14:28:14.299094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.858 [2024-12-05 14:28:14.299108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:53864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.858 [2024-12-05 14:28:14.299120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.858 [2024-12-05 14:28:14.299134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:53872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.858 [2024-12-05 14:28:14.299146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.858 [2024-12-05 14:28:14.299160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:53888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.858 [2024-12-05 14:28:14.299172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.858 [2024-12-05 14:28:14.299185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:53896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.858 [2024-12-05 14:28:14.299197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.858 [2024-12-05 14:28:14.299211] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868b10 is same with the state(5) to be set 00:22:19.858 [2024-12-05 14:28:14.299270] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.858 [2024-12-05 14:28:14.299287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.858 [2024-12-05 14:28:14.299313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53912 len:8 PRP1 0x0 PRP2 0x0 00:22:19.858 [2024-12-05 14:28:14.299325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.858 [2024-12-05 14:28:14.299400] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1868b10 was disconnected and freed. reset controller. 
00:22:19.858 [2024-12-05 14:28:14.299417] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:22:19.858 [2024-12-05 14:28:14.299469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.858 [2024-12-05 14:28:14.299494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.858 [2024-12-05 14:28:14.299506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.858 [2024-12-05 14:28:14.299517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.858 [2024-12-05 14:28:14.299530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.858 [2024-12-05 14:28:14.299557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.858 [2024-12-05 14:28:14.299569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.858 [2024-12-05 14:28:14.299580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.858 [2024-12-05 14:28:14.299592] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:19.858 [2024-12-05 14:28:14.301760] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:19.858 [2024-12-05 14:28:14.301800] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1809cb0 (9): Bad file descriptor 00:22:19.858 [2024-12-05 14:28:14.337074] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:19.858 [2024-12-05 14:28:18.836356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:76744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.858 [2024-12-05 14:28:18.836425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.858 [2024-12-05 14:28:18.836452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:76752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.858 [2024-12-05 14:28:18.836467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.858 [2024-12-05 14:28:18.836482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:76760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.858 [2024-12-05 14:28:18.836495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.858 [2024-12-05 14:28:18.836508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:76768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.858 [2024-12-05 14:28:18.836521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.858 [2024-12-05 14:28:18.836534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:76776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.858 [2024-12-05 14:28:18.836573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.858 [2024-12-05 14:28:18.836605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:76824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.858 [2024-12-05 14:28:18.836618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.858 [2024-12-05 14:28:18.836632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:76832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.858 [2024-12-05 14:28:18.836644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.858 [2024-12-05 14:28:18.836657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:77400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.858 [2024-12-05 14:28:18.836669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.858 [2024-12-05 14:28:18.836683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:77416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.858 [2024-12-05 14:28:18.836695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.858 [2024-12-05 14:28:18.836709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:77424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.858 [2024-12-05 14:28:18.836721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.858 [2024-12-05 14:28:18.836734] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:77440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.858 [2024-12-05 14:28:18.836746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.858 [2024-12-05 14:28:18.836759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:77448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.858 [2024-12-05 14:28:18.836771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.858 [2024-12-05 14:28:18.836785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:77464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.858 [2024-12-05 14:28:18.836797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.858 [2024-12-05 14:28:18.836826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:77472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.858 [2024-12-05 14:28:18.836838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.858 [2024-12-05 14:28:18.836852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:77480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.858 [2024-12-05 14:28:18.836864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.858 [2024-12-05 14:28:18.836878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:77488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.858 [2024-12-05 14:28:18.836907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.858 [2024-12-05 14:28:18.836926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:77496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.859 [2024-12-05 14:28:18.836943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.859 [2024-12-05 14:28:18.836957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:77536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.859 [2024-12-05 14:28:18.836988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.859 [2024-12-05 14:28:18.837003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:77544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.859 [2024-12-05 14:28:18.837017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.859 [2024-12-05 14:28:18.837031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:77552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.859 [2024-12-05 14:28:18.837044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.859 [2024-12-05 14:28:18.837058] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.859 [2024-12-05 14:28:18.837072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.859 [2024-12-05 14:28:18.837086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:77576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.859 [2024-12-05 14:28:18.837100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.859 [2024-12-05 14:28:18.837113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:77584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.859 [2024-12-05 14:28:18.837126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.859 [2024-12-05 14:28:18.837150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:77600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.859 [2024-12-05 14:28:18.837162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.859 [2024-12-05 14:28:18.837192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:77616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.859 [2024-12-05 14:28:18.837205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.859 [2024-12-05 14:28:18.837230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:77624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.859 [2024-12-05 14:28:18.837242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.859 [2024-12-05 14:28:18.837256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:77648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.859 [2024-12-05 14:28:18.837268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.859 [2024-12-05 14:28:18.837281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:77664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.859 [2024-12-05 14:28:18.837294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.859 [2024-12-05 14:28:18.837323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:77672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.859 [2024-12-05 14:28:18.837335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.859 [2024-12-05 14:28:18.837349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:77688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.859 [2024-12-05 14:28:18.837361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.859 [2024-12-05 14:28:18.837385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:40 nsid:1 lba:77696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.859 [2024-12-05 14:28:18.837397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.859 [2024-12-05 14:28:18.837411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:76856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.859 [2024-12-05 14:28:18.837423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.859 [2024-12-05 14:28:18.837453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.859 [2024-12-05 14:28:18.837466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.859 [2024-12-05 14:28:18.837480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:76880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.859 [2024-12-05 14:28:18.837499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.859 [2024-12-05 14:28:18.837513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:76888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.859 [2024-12-05 14:28:18.837527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.859 [2024-12-05 14:28:18.837541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:76904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.859 [2024-12-05 14:28:18.837554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.859 [2024-12-05 14:28:18.837567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:76912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.859 [2024-12-05 14:28:18.837579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.859 [2024-12-05 14:28:18.837593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:76968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.859 [2024-12-05 14:28:18.837605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.859 [2024-12-05 14:28:18.837619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:76976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.859 [2024-12-05 14:28:18.837632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.859 [2024-12-05 14:28:18.837645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:77000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.859 [2024-12-05 14:28:18.837658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.859 [2024-12-05 14:28:18.837671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:77008 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.859 [2024-12-05 14:28:18.837684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.859 [2024-12-05 14:28:18.837697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.859 [2024-12-05 14:28:18.837709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.859 [2024-12-05 14:28:18.837723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:77024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.859 [2024-12-05 14:28:18.837741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.859 [2024-12-05 14:28:18.837756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:77040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.859 [2024-12-05 14:28:18.837768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.859 [2024-12-05 14:28:18.837781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:77064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.859 [2024-12-05 14:28:18.837794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.859 [2024-12-05 14:28:18.837807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:77080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.859 [2024-12-05 14:28:18.837835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.859 [2024-12-05 14:28:18.837849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:77088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.859 [2024-12-05 14:28:18.837861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.859 [2024-12-05 14:28:18.837888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:77704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.859 [2024-12-05 14:28:18.837903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.859 [2024-12-05 14:28:18.837917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.859 [2024-12-05 14:28:18.837931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.859 [2024-12-05 14:28:18.837946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.859 [2024-12-05 14:28:18.837959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.859 [2024-12-05 14:28:18.837973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:77736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:19.859 [2024-12-05 14:28:18.837986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.859 [2024-12-05 14:28:18.838000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:77744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.859 [2024-12-05 14:28:18.838012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.859 [2024-12-05 14:28:18.838026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:77752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.859 [2024-12-05 14:28:18.838039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.859 [2024-12-05 14:28:18.838052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:77760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.859 [2024-12-05 14:28:18.838065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.859 [2024-12-05 14:28:18.838079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:77768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.859 [2024-12-05 14:28:18.838091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.859 [2024-12-05 14:28:18.838113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:77776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.859 [2024-12-05 14:28:18.838127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.859 [2024-12-05 14:28:18.838156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:77784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.860 [2024-12-05 14:28:18.838168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.860 [2024-12-05 14:28:18.838197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:77792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.860 [2024-12-05 14:28:18.838210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.860 [2024-12-05 14:28:18.838223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:77800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.860 [2024-12-05 14:28:18.838235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.860 [2024-12-05 14:28:18.838249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:77808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.860 [2024-12-05 14:28:18.838261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.860 [2024-12-05 14:28:18.838275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.860 [2024-12-05 14:28:18.838287] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.860 [2024-12-05 14:28:18.838301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.860 [2024-12-05 14:28:18.838313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.860 [2024-12-05 14:28:18.838327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:77832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.860 [2024-12-05 14:28:18.838339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.860 [2024-12-05 14:28:18.838353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:77840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.860 [2024-12-05 14:28:18.838365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.860 [2024-12-05 14:28:18.838378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:77096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.860 [2024-12-05 14:28:18.838390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.860 [2024-12-05 14:28:18.838420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:77128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.860 [2024-12-05 14:28:18.838432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.860 [2024-12-05 14:28:18.838447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:77160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.860 [2024-12-05 14:28:18.838459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.860 [2024-12-05 14:28:18.838473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:77168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.860 [2024-12-05 14:28:18.838492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.860 [2024-12-05 14:28:18.838507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:77200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.860 [2024-12-05 14:28:18.838520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.860 [2024-12-05 14:28:18.838534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:77208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.860 [2024-12-05 14:28:18.838547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.860 [2024-12-05 14:28:18.838560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:77224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.860 [2024-12-05 14:28:18.838573] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.860 [2024-12-05 14:28:18.838587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.860 [2024-12-05 14:28:18.838600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.860 [2024-12-05 14:28:18.838614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:77848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.860 [2024-12-05 14:28:18.838626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.860 [2024-12-05 14:28:18.838641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:77856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.860 [2024-12-05 14:28:18.838671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.860 [2024-12-05 14:28:18.838686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:77864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.860 [2024-12-05 14:28:18.838699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.860 [2024-12-05 14:28:18.838713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:77872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.860 [2024-12-05 14:28:18.838725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.860 [2024-12-05 14:28:18.838739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:77248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.860 [2024-12-05 14:28:18.838751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.860 [2024-12-05 14:28:18.838764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:77272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.860 [2024-12-05 14:28:18.838777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.860 [2024-12-05 14:28:18.838800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:77288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.860 [2024-12-05 14:28:18.838828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.860 [2024-12-05 14:28:18.838842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:77320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.860 [2024-12-05 14:28:18.838855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.860 [2024-12-05 14:28:18.838878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:77336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.860 [2024-12-05 14:28:18.838900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.860 [2024-12-05 14:28:18.838916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:77352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.860 [2024-12-05 14:28:18.838929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.860 [2024-12-05 14:28:18.838942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.860 [2024-12-05 14:28:18.838955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.860 [2024-12-05 14:28:18.838968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:77376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.860 [2024-12-05 14:28:18.838981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.860 [2024-12-05 14:28:18.838995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:77880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.860 [2024-12-05 14:28:18.839007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.860 [2024-12-05 14:28:18.839033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:77888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.860 [2024-12-05 14:28:18.839045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.860 [2024-12-05 14:28:18.839059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:77896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.860 [2024-12-05 14:28:18.839072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.860 [2024-12-05 14:28:18.839086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:77904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.860 [2024-12-05 14:28:18.839105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.860 [2024-12-05 14:28:18.839118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:77912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.860 [2024-12-05 14:28:18.839132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.860 [2024-12-05 14:28:18.839154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:77920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.860 [2024-12-05 14:28:18.839193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.860 [2024-12-05 14:28:18.839208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:77928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.860 [2024-12-05 14:28:18.839220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:19.860 [2024-12-05 14:28:18.839234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:77936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.860 [2024-12-05 14:28:18.839247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.860 [2024-12-05 14:28:18.839260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:77944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.861 [2024-12-05 14:28:18.839272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.861 [2024-12-05 14:28:18.839292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:77952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.861 [2024-12-05 14:28:18.839318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.861 [2024-12-05 14:28:18.839331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:77960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.861 [2024-12-05 14:28:18.839343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.861 [2024-12-05 14:28:18.839356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:77392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.861 [2024-12-05 14:28:18.839368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.861 [2024-12-05 14:28:18.839382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:77408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.861 [2024-12-05 14:28:18.839394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.861 [2024-12-05 14:28:18.839420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:77432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.861 [2024-12-05 14:28:18.839432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.861 [2024-12-05 14:28:18.839462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:77456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.861 [2024-12-05 14:28:18.839481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.861 [2024-12-05 14:28:18.839494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:77504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.861 [2024-12-05 14:28:18.839516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.861 [2024-12-05 14:28:18.839529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:77512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.861 [2024-12-05 14:28:18.839551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.861 [2024-12-05 14:28:18.839565] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:77520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.861 [2024-12-05 14:28:18.839578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.861 [2024-12-05 14:28:18.839591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:77528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.861 [2024-12-05 14:28:18.839603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.861 [2024-12-05 14:28:18.839617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:77968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.861 [2024-12-05 14:28:18.839635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.861 [2024-12-05 14:28:18.839650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:77976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.861 [2024-12-05 14:28:18.839662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.861 [2024-12-05 14:28:18.839675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:77984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.861 [2024-12-05 14:28:18.839695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.861 [2024-12-05 14:28:18.839710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:77992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.861 [2024-12-05 14:28:18.839722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.861 [2024-12-05 14:28:18.839736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:78000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.861 [2024-12-05 14:28:18.839748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.861 [2024-12-05 14:28:18.839762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:78008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.861 [2024-12-05 14:28:18.839775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.861 [2024-12-05 14:28:18.839789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:78016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.861 [2024-12-05 14:28:18.839801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.861 [2024-12-05 14:28:18.839831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.861 [2024-12-05 14:28:18.839844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.861 [2024-12-05 14:28:18.839879] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.861 [2024-12-05 14:28:18.839895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.861 [2024-12-05 14:28:18.839910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:78040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.861 [2024-12-05 14:28:18.839929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.861 [2024-12-05 14:28:18.839942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.861 [2024-12-05 14:28:18.839955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.861 [2024-12-05 14:28:18.839969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:78056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.861 [2024-12-05 14:28:18.840007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.861 [2024-12-05 14:28:18.840023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:78064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.861 [2024-12-05 14:28:18.840037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.861 [2024-12-05 14:28:18.840050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:78072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.861 [2024-12-05 14:28:18.840063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.861 [2024-12-05 14:28:18.840078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.861 [2024-12-05 14:28:18.840091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.861 [2024-12-05 14:28:18.840113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:78088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.861 [2024-12-05 14:28:18.840127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.861 [2024-12-05 14:28:18.840141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:78096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.861 [2024-12-05 14:28:18.840160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.861 [2024-12-05 14:28:18.840176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:77568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.861 [2024-12-05 14:28:18.840189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.861 [2024-12-05 14:28:18.840204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:77592 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.861 [2024-12-05 14:28:18.840217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.861 [2024-12-05 14:28:18.840233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:77608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.861 [2024-12-05 14:28:18.840290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.861 [2024-12-05 14:28:18.840304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:77632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.861 [2024-12-05 14:28:18.840316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.861 [2024-12-05 14:28:18.840329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:77640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.861 [2024-12-05 14:28:18.840341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.861 [2024-12-05 14:28:18.840355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:77656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.861 [2024-12-05 14:28:18.840367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.861 [2024-12-05 14:28:18.840396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:77680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.861 [2024-12-05 14:28:18.840407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.861 [2024-12-05 14:28:18.840420] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1910060 is same with the state(5) to be set 00:22:19.861 [2024-12-05 14:28:18.840435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.861 [2024-12-05 14:28:18.840444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.861 [2024-12-05 14:28:18.840453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77712 len:8 PRP1 0x0 PRP2 0x0 00:22:19.861 [2024-12-05 14:28:18.840464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.861 [2024-12-05 14:28:18.840532] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1910060 was disconnected and freed. reset controller. 
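Each READ/WRITE line paired with an 'ABORTED - SQ DELETION' completion in the block above is one queued I/O being completed manually while the TCP qpair to the failed path is torn down; once the queue is drained the qpair is freed and the bdev layer schedules a controller reset onto the next path. When triaging a log like this, a rough tally can be pulled out with a couple of greps (purely illustrative helper, not part of failover.sh; 'console.log' stands in for wherever this output was saved, and grep -o is used because this capture packs many records onto one physical line):

  grep -o 'ABORTED - SQ DELETION' console.log | wc -l    # queued I/Os aborted during qpair teardown
  grep -o 'Start failover from' console.log | wc -l      # failover legs taken during the run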
00:22:19.861 [2024-12-05 14:28:18.840550] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:19.861 [2024-12-05 14:28:18.840602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.861 [2024-12-05 14:28:18.840629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.861 [2024-12-05 14:28:18.840644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.861 [2024-12-05 14:28:18.840655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.861 [2024-12-05 14:28:18.840667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.861 [2024-12-05 14:28:18.840679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.862 [2024-12-05 14:28:18.840690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.862 [2024-12-05 14:28:18.840701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.862 [2024-12-05 14:28:18.840714] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:19.862 [2024-12-05 14:28:18.843000] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:19.862 [2024-12-05 14:28:18.843041] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1809cb0 (9): Bad file descriptor 00:22:19.862 [2024-12-05 14:28:18.859510] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:19.862 00:22:19.862 Latency(us) 00:22:19.862 [2024-12-05T14:28:25.510Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:19.862 [2024-12-05T14:28:25.510Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:19.862 Verification LBA range: start 0x0 length 0x4000 00:22:19.862 NVMe0n1 : 15.01 15042.58 58.76 293.03 0.00 8331.79 525.03 14596.65 00:22:19.862 [2024-12-05T14:28:25.510Z] =================================================================================================================== 00:22:19.862 [2024-12-05T14:28:25.510Z] Total : 15042.58 58.76 293.03 0.00 8331.79 525.03 14596.65 00:22:19.862 Received shutdown signal, test time was about 15.000000 seconds 00:22:19.862 00:22:19.862 Latency(us) 00:22:19.862 [2024-12-05T14:28:25.510Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:19.862 [2024-12-05T14:28:25.510Z] =================================================================================================================== 00:22:19.862 [2024-12-05T14:28:25.510Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:19.862 14:28:24 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:22:19.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
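The 15-second verify pass above completes despite the injected path failures (the summary table reports 15042.58 IOPS over 15.01 s, with the Fail/s column presumably reflecting the commands aborted at each failover), and after bdevperf shuts down the script counts how many controller resets completed successfully. A minimal sketch of that check, assuming the run's output was captured in the try.txt file that is cat'ed later in this log (the exact variable names in host/failover.sh may differ):

  # expect one successful reset per failover leg: 4420->4421, 4421->4422, 4422->4420
  count=$(grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
  (( count == 3 )) || exit 1

The trace that follows shows exactly this comparison (count=3, then the (( count != 3 )) test) before the second, shorter bdevperf pass is started.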
00:22:19.862 14:28:24 -- host/failover.sh@65 -- # count=3 00:22:19.862 14:28:24 -- host/failover.sh@67 -- # (( count != 3 )) 00:22:19.862 14:28:24 -- host/failover.sh@73 -- # bdevperf_pid=95897 00:22:19.862 14:28:24 -- host/failover.sh@75 -- # waitforlisten 95897 /var/tmp/bdevperf.sock 00:22:19.862 14:28:24 -- common/autotest_common.sh@829 -- # '[' -z 95897 ']' 00:22:19.862 14:28:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:19.862 14:28:24 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:22:19.862 14:28:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:19.862 14:28:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:19.862 14:28:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:19.862 14:28:24 -- common/autotest_common.sh@10 -- # set +x 00:22:20.427 14:28:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:20.427 14:28:25 -- common/autotest_common.sh@862 -- # return 0 00:22:20.427 14:28:25 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:20.685 [2024-12-05 14:28:26.233239] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:20.685 14:28:26 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:20.943 [2024-12-05 14:28:26.441346] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:20.943 14:28:26 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:21.201 NVMe0n1 00:22:21.201 14:28:26 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:21.460 00:22:21.460 14:28:27 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:21.720 00:22:21.720 14:28:27 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:21.720 14:28:27 -- host/failover.sh@82 -- # grep -q NVMe0 00:22:21.979 14:28:27 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:22.238 14:28:27 -- host/failover.sh@87 -- # sleep 3 00:22:25.525 14:28:30 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:25.525 14:28:30 -- host/failover.sh@88 -- # grep -q NVMe0 00:22:25.525 14:28:31 -- host/failover.sh@90 -- # run_test_pid=96036 00:22:25.525 14:28:31 -- host/failover.sh@92 -- # wait 96036 00:22:25.525 14:28:31 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:26.902 0 00:22:26.902 14:28:32 -- host/failover.sh@94 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:26.902 [2024-12-05 14:28:24.990555] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:26.902 [2024-12-05 14:28:24.990675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95897 ] 00:22:26.902 [2024-12-05 14:28:25.131792] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.902 [2024-12-05 14:28:25.223311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:26.902 [2024-12-05 14:28:27.720054] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:26.902 [2024-12-05 14:28:27.720168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.902 [2024-12-05 14:28:27.720193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.902 [2024-12-05 14:28:27.720212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.902 [2024-12-05 14:28:27.720226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.902 [2024-12-05 14:28:27.720240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.902 [2024-12-05 14:28:27.720268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.902 [2024-12-05 14:28:27.720313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.902 [2024-12-05 14:28:27.720325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.902 [2024-12-05 14:28:27.720338] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:26.902 [2024-12-05 14:28:27.720406] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:26.902 [2024-12-05 14:28:27.720438] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b9cb0 (9): Bad file descriptor 00:22:26.902 [2024-12-05 14:28:27.731201] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:26.902 Running I/O for 1 seconds... 
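The xtrace lines just before this try.txt dump show how the second, one-second pass is wired up: a fresh bdevperf is started in RPC-server mode, two extra listeners are added on the target, the same subsystem is attached through all three ports, and the primary path is detached, which is why the try.txt output above ends with a failover from 4420 to 4421. A condensed reconstruction of that sequence, using the commands visible in the trace (the redirection of bdevperf's output into try.txt is an assumption, since only the later cat names the file, and the intermediate bdev_nvme_get_controllers | grep -q NVMe0 checks are omitted):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  # second bdevperf instance, idle until perform_tests is sent (-z); output assumed to land in try.txt
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 1 -f &> /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt &
  bdevperf_pid=$!
  waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock   # helper from autotest_common.sh

  # expose two more target listeners, then attach the controller through all three ports
  $rpc_py nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421
  $rpc_py nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4422
  for port in 4420 4421 4422; do
      $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
          -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $nqn
  done

  # drop the primary path, give the reset time to settle, then kick off the short verify run
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn
  sleep 3
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests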
00:22:26.902 00:22:26.902 Latency(us) 00:22:26.902 [2024-12-05T14:28:32.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.902 [2024-12-05T14:28:32.550Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:26.902 Verification LBA range: start 0x0 length 0x4000 00:22:26.902 NVMe0n1 : 1.01 15221.23 59.46 0.00 0.00 8376.47 975.59 13285.93 00:22:26.902 [2024-12-05T14:28:32.550Z] =================================================================================================================== 00:22:26.902 [2024-12-05T14:28:32.550Z] Total : 15221.23 59.46 0.00 0.00 8376.47 975.59 13285.93 00:22:26.902 14:28:32 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:26.902 14:28:32 -- host/failover.sh@95 -- # grep -q NVMe0 00:22:26.902 14:28:32 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:27.160 14:28:32 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:27.160 14:28:32 -- host/failover.sh@99 -- # grep -q NVMe0 00:22:27.419 14:28:32 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:27.678 14:28:33 -- host/failover.sh@101 -- # sleep 3 00:22:30.963 14:28:36 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:30.963 14:28:36 -- host/failover.sh@103 -- # grep -q NVMe0 00:22:30.963 14:28:36 -- host/failover.sh@108 -- # killprocess 95897 00:22:30.963 14:28:36 -- common/autotest_common.sh@936 -- # '[' -z 95897 ']' 00:22:30.963 14:28:36 -- common/autotest_common.sh@940 -- # kill -0 95897 00:22:30.963 14:28:36 -- common/autotest_common.sh@941 -- # uname 00:22:30.963 14:28:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:30.963 14:28:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95897 00:22:30.963 14:28:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:30.963 14:28:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:30.963 killing process with pid 95897 00:22:30.963 14:28:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95897' 00:22:30.963 14:28:36 -- common/autotest_common.sh@955 -- # kill 95897 00:22:30.963 14:28:36 -- common/autotest_common.sh@960 -- # wait 95897 00:22:31.222 14:28:36 -- host/failover.sh@110 -- # sync 00:22:31.222 14:28:36 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:31.480 14:28:37 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:31.480 14:28:37 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:31.480 14:28:37 -- host/failover.sh@116 -- # nvmftestfini 00:22:31.480 14:28:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:31.480 14:28:37 -- nvmf/common.sh@116 -- # sync 00:22:31.480 14:28:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:31.480 14:28:37 -- nvmf/common.sh@119 -- # set +e 00:22:31.480 14:28:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:31.480 14:28:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:31.480 rmmod nvme_tcp 
00:22:31.480 rmmod nvme_fabrics 00:22:31.480 rmmod nvme_keyring 00:22:31.480 14:28:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:31.480 14:28:37 -- nvmf/common.sh@123 -- # set -e 00:22:31.480 14:28:37 -- nvmf/common.sh@124 -- # return 0 00:22:31.480 14:28:37 -- nvmf/common.sh@477 -- # '[' -n 95534 ']' 00:22:31.480 14:28:37 -- nvmf/common.sh@478 -- # killprocess 95534 00:22:31.480 14:28:37 -- common/autotest_common.sh@936 -- # '[' -z 95534 ']' 00:22:31.480 14:28:37 -- common/autotest_common.sh@940 -- # kill -0 95534 00:22:31.480 14:28:37 -- common/autotest_common.sh@941 -- # uname 00:22:31.480 14:28:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:31.480 14:28:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95534 00:22:31.480 14:28:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:31.480 14:28:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:31.480 killing process with pid 95534 00:22:31.480 14:28:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95534' 00:22:31.480 14:28:37 -- common/autotest_common.sh@955 -- # kill 95534 00:22:31.480 14:28:37 -- common/autotest_common.sh@960 -- # wait 95534 00:22:32.047 14:28:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:32.047 14:28:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:32.047 14:28:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:32.047 14:28:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:32.047 14:28:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:32.047 14:28:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.047 14:28:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:32.047 14:28:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.047 14:28:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:32.047 00:22:32.047 real 0m33.220s 00:22:32.047 user 2m8.183s 00:22:32.047 sys 0m5.229s 00:22:32.047 14:28:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:32.047 14:28:37 -- common/autotest_common.sh@10 -- # set +x 00:22:32.047 ************************************ 00:22:32.047 END TEST nvmf_failover 00:22:32.047 ************************************ 00:22:32.047 14:28:37 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:32.047 14:28:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:32.047 14:28:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:32.047 14:28:37 -- common/autotest_common.sh@10 -- # set +x 00:22:32.047 ************************************ 00:22:32.047 START TEST nvmf_discovery 00:22:32.047 ************************************ 00:22:32.047 14:28:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:32.047 * Looking for test storage... 
00:22:32.047 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:32.047 14:28:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:32.047 14:28:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:32.047 14:28:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:32.047 14:28:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:32.047 14:28:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:32.047 14:28:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:32.047 14:28:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:32.047 14:28:37 -- scripts/common.sh@335 -- # IFS=.-: 00:22:32.047 14:28:37 -- scripts/common.sh@335 -- # read -ra ver1 00:22:32.047 14:28:37 -- scripts/common.sh@336 -- # IFS=.-: 00:22:32.047 14:28:37 -- scripts/common.sh@336 -- # read -ra ver2 00:22:32.047 14:28:37 -- scripts/common.sh@337 -- # local 'op=<' 00:22:32.047 14:28:37 -- scripts/common.sh@339 -- # ver1_l=2 00:22:32.047 14:28:37 -- scripts/common.sh@340 -- # ver2_l=1 00:22:32.047 14:28:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:32.047 14:28:37 -- scripts/common.sh@343 -- # case "$op" in 00:22:32.047 14:28:37 -- scripts/common.sh@344 -- # : 1 00:22:32.047 14:28:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:32.047 14:28:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:32.047 14:28:37 -- scripts/common.sh@364 -- # decimal 1 00:22:32.047 14:28:37 -- scripts/common.sh@352 -- # local d=1 00:22:32.047 14:28:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:32.047 14:28:37 -- scripts/common.sh@354 -- # echo 1 00:22:32.047 14:28:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:32.047 14:28:37 -- scripts/common.sh@365 -- # decimal 2 00:22:32.047 14:28:37 -- scripts/common.sh@352 -- # local d=2 00:22:32.047 14:28:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:32.047 14:28:37 -- scripts/common.sh@354 -- # echo 2 00:22:32.047 14:28:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:32.047 14:28:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:32.047 14:28:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:32.047 14:28:37 -- scripts/common.sh@367 -- # return 0 00:22:32.047 14:28:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:32.047 14:28:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:32.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.047 --rc genhtml_branch_coverage=1 00:22:32.047 --rc genhtml_function_coverage=1 00:22:32.047 --rc genhtml_legend=1 00:22:32.047 --rc geninfo_all_blocks=1 00:22:32.047 --rc geninfo_unexecuted_blocks=1 00:22:32.047 00:22:32.047 ' 00:22:32.047 14:28:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:32.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.047 --rc genhtml_branch_coverage=1 00:22:32.047 --rc genhtml_function_coverage=1 00:22:32.047 --rc genhtml_legend=1 00:22:32.047 --rc geninfo_all_blocks=1 00:22:32.047 --rc geninfo_unexecuted_blocks=1 00:22:32.047 00:22:32.047 ' 00:22:32.047 14:28:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:32.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.047 --rc genhtml_branch_coverage=1 00:22:32.047 --rc genhtml_function_coverage=1 00:22:32.047 --rc genhtml_legend=1 00:22:32.047 --rc geninfo_all_blocks=1 00:22:32.047 --rc geninfo_unexecuted_blocks=1 00:22:32.047 00:22:32.047 ' 00:22:32.047 
14:28:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:32.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.047 --rc genhtml_branch_coverage=1 00:22:32.047 --rc genhtml_function_coverage=1 00:22:32.047 --rc genhtml_legend=1 00:22:32.047 --rc geninfo_all_blocks=1 00:22:32.047 --rc geninfo_unexecuted_blocks=1 00:22:32.047 00:22:32.047 ' 00:22:32.047 14:28:37 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:32.047 14:28:37 -- nvmf/common.sh@7 -- # uname -s 00:22:32.047 14:28:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:32.047 14:28:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:32.047 14:28:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:32.047 14:28:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:32.047 14:28:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:32.047 14:28:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:32.047 14:28:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:32.047 14:28:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:32.047 14:28:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:32.047 14:28:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:32.047 14:28:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:22:32.047 14:28:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:22:32.047 14:28:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:32.047 14:28:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:32.047 14:28:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:32.047 14:28:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:32.047 14:28:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:32.047 14:28:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:32.047 14:28:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:32.047 14:28:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.047 14:28:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.047 14:28:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.047 14:28:37 -- paths/export.sh@5 -- # export PATH 00:22:32.047 14:28:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.047 14:28:37 -- nvmf/common.sh@46 -- # : 0 00:22:32.047 14:28:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:32.047 14:28:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:32.047 14:28:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:32.047 14:28:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:32.047 14:28:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:32.047 14:28:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:32.047 14:28:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:32.047 14:28:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:32.047 14:28:37 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:32.047 14:28:37 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:32.047 14:28:37 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:32.047 14:28:37 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:32.047 14:28:37 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:32.047 14:28:37 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:32.047 14:28:37 -- host/discovery.sh@25 -- # nvmftestinit 00:22:32.047 14:28:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:32.047 14:28:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:32.047 14:28:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:32.047 14:28:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:32.047 14:28:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:32.047 14:28:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.047 14:28:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:32.047 14:28:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.047 14:28:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:32.047 14:28:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:32.047 14:28:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:32.047 14:28:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:32.047 14:28:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:32.047 14:28:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:32.047 14:28:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:32.047 14:28:37 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:32.047 14:28:37 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:32.047 14:28:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:32.047 14:28:37 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:32.047 14:28:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:32.047 14:28:37 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:32.047 14:28:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:32.047 14:28:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:32.047 14:28:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:32.047 14:28:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:32.047 14:28:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:32.047 14:28:37 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:32.308 14:28:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:32.308 Cannot find device "nvmf_tgt_br" 00:22:32.308 14:28:37 -- nvmf/common.sh@154 -- # true 00:22:32.308 14:28:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:32.308 Cannot find device "nvmf_tgt_br2" 00:22:32.308 14:28:37 -- nvmf/common.sh@155 -- # true 00:22:32.308 14:28:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:32.308 14:28:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:32.308 Cannot find device "nvmf_tgt_br" 00:22:32.308 14:28:37 -- nvmf/common.sh@157 -- # true 00:22:32.308 14:28:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:32.308 Cannot find device "nvmf_tgt_br2" 00:22:32.308 14:28:37 -- nvmf/common.sh@158 -- # true 00:22:32.308 14:28:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:32.308 14:28:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:32.308 14:28:37 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:32.308 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:32.308 14:28:37 -- nvmf/common.sh@161 -- # true 00:22:32.308 14:28:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:32.308 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:32.308 14:28:37 -- nvmf/common.sh@162 -- # true 00:22:32.308 14:28:37 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:32.308 14:28:37 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:32.308 14:28:37 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:32.308 14:28:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:32.308 14:28:37 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:32.308 14:28:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:32.308 14:28:37 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:32.308 14:28:37 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:32.308 14:28:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:32.308 14:28:37 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:32.308 14:28:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:32.308 14:28:37 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:32.308 14:28:37 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:32.308 14:28:37 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:32.308 14:28:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:32.308 14:28:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:32.308 14:28:37 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:32.308 14:28:37 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:32.308 14:28:37 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:32.308 14:28:37 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:32.308 14:28:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:32.576 14:28:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:32.576 14:28:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:32.576 14:28:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:32.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:32.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:22:32.576 00:22:32.576 --- 10.0.0.2 ping statistics --- 00:22:32.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.576 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:22:32.576 14:28:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:32.576 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:32.576 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:22:32.576 00:22:32.576 --- 10.0.0.3 ping statistics --- 00:22:32.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.576 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:22:32.576 14:28:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:32.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:32.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:22:32.576 00:22:32.576 --- 10.0.0.1 ping statistics --- 00:22:32.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.576 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:22:32.576 14:28:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:32.576 14:28:37 -- nvmf/common.sh@421 -- # return 0 00:22:32.576 14:28:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:32.576 14:28:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:32.576 14:28:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:32.576 14:28:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:32.576 14:28:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:32.576 14:28:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:32.576 14:28:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:32.576 14:28:38 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:32.576 14:28:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:32.576 14:28:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:32.576 14:28:38 -- common/autotest_common.sh@10 -- # set +x 00:22:32.576 14:28:38 -- nvmf/common.sh@469 -- # nvmfpid=96350 00:22:32.576 14:28:38 -- nvmf/common.sh@470 -- # waitforlisten 96350 00:22:32.576 14:28:38 -- common/autotest_common.sh@829 -- # '[' -z 96350 ']' 00:22:32.576 14:28:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.576 14:28:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:32.576 14:28:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:32.576 14:28:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.576 14:28:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:32.576 14:28:38 -- common/autotest_common.sh@10 -- # set +x 00:22:32.576 [2024-12-05 14:28:38.078317] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:32.576 [2024-12-05 14:28:38.078403] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:32.869 [2024-12-05 14:28:38.220737] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.869 [2024-12-05 14:28:38.300078] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:32.869 [2024-12-05 14:28:38.300225] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:32.869 [2024-12-05 14:28:38.300239] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:32.869 [2024-12-05 14:28:38.300264] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
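Behind the "Cannot find device" noise and the two ping checks above is the small virtual topology that nvmf_veth_init builds so the target can live in its own network namespace while the initiator stays on the host. A trimmed sketch of that plumbing, using the device and namespace names from the trace (the second target interface, nvmf_tgt_if2 at 10.0.0.3, is set up the same way and omitted here):

# Sketch of the veth/bridge/netns layout from nvmf_veth_init, as traced above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # one end moves into the namespace

ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge                             # bridge joins the host-side peers
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2                                          # host -> namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1           # namespace -> host

Once both pings pass, the target itself is started inside the namespace (the ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -m 0x2 line above), so every listener it opens on 10.0.0.2 is reachable from the host only through this bridge.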
00:22:32.869 [2024-12-05 14:28:38.300356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.448 14:28:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:33.448 14:28:39 -- common/autotest_common.sh@862 -- # return 0 00:22:33.448 14:28:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:33.448 14:28:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:33.448 14:28:39 -- common/autotest_common.sh@10 -- # set +x 00:22:33.705 14:28:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:33.705 14:28:39 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:33.705 14:28:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.705 14:28:39 -- common/autotest_common.sh@10 -- # set +x 00:22:33.705 [2024-12-05 14:28:39.134283] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:33.705 14:28:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.705 14:28:39 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:33.705 14:28:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.705 14:28:39 -- common/autotest_common.sh@10 -- # set +x 00:22:33.705 [2024-12-05 14:28:39.142387] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:33.705 14:28:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.705 14:28:39 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:33.705 14:28:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.705 14:28:39 -- common/autotest_common.sh@10 -- # set +x 00:22:33.705 null0 00:22:33.705 14:28:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.705 14:28:39 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:33.705 14:28:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.705 14:28:39 -- common/autotest_common.sh@10 -- # set +x 00:22:33.705 null1 00:22:33.705 14:28:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.705 14:28:39 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:33.705 14:28:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.705 14:28:39 -- common/autotest_common.sh@10 -- # set +x 00:22:33.705 14:28:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.705 14:28:39 -- host/discovery.sh@45 -- # hostpid=96400 00:22:33.705 14:28:39 -- host/discovery.sh@46 -- # waitforlisten 96400 /tmp/host.sock 00:22:33.705 14:28:39 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:33.705 14:28:39 -- common/autotest_common.sh@829 -- # '[' -z 96400 ']' 00:22:33.705 14:28:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:33.705 14:28:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:33.705 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:33.705 14:28:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:33.705 14:28:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:33.705 14:28:39 -- common/autotest_common.sh@10 -- # set +x 00:22:33.705 [2024-12-05 14:28:39.217350] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:33.705 [2024-12-05 14:28:39.217409] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96400 ] 00:22:33.705 [2024-12-05 14:28:39.350399] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.963 [2024-12-05 14:28:39.432916] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:33.963 [2024-12-05 14:28:39.433113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:34.528 14:28:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:34.528 14:28:40 -- common/autotest_common.sh@862 -- # return 0 00:22:34.528 14:28:40 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:34.528 14:28:40 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:34.528 14:28:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.528 14:28:40 -- common/autotest_common.sh@10 -- # set +x 00:22:34.528 14:28:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.528 14:28:40 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:34.528 14:28:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.528 14:28:40 -- common/autotest_common.sh@10 -- # set +x 00:22:34.786 14:28:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.786 14:28:40 -- host/discovery.sh@72 -- # notify_id=0 00:22:34.786 14:28:40 -- host/discovery.sh@78 -- # get_subsystem_names 00:22:34.786 14:28:40 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:34.786 14:28:40 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:34.786 14:28:40 -- host/discovery.sh@59 -- # sort 00:22:34.786 14:28:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.786 14:28:40 -- host/discovery.sh@59 -- # xargs 00:22:34.786 14:28:40 -- common/autotest_common.sh@10 -- # set +x 00:22:34.786 14:28:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.786 14:28:40 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:22:34.786 14:28:40 -- host/discovery.sh@79 -- # get_bdev_list 00:22:34.786 14:28:40 -- host/discovery.sh@55 -- # sort 00:22:34.786 14:28:40 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:34.786 14:28:40 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:34.786 14:28:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.786 14:28:40 -- host/discovery.sh@55 -- # xargs 00:22:34.786 14:28:40 -- common/autotest_common.sh@10 -- # set +x 00:22:34.786 14:28:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.786 14:28:40 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:22:34.786 14:28:40 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:34.786 14:28:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.786 14:28:40 -- common/autotest_common.sh@10 -- # set +x 00:22:34.786 14:28:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.786 14:28:40 -- host/discovery.sh@82 -- # get_subsystem_names 00:22:34.786 14:28:40 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:34.786 14:28:40 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:34.786 14:28:40 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.786 14:28:40 -- host/discovery.sh@59 -- # sort 00:22:34.786 14:28:40 -- common/autotest_common.sh@10 -- # set +x 00:22:34.786 14:28:40 -- host/discovery.sh@59 -- # xargs 00:22:34.786 14:28:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.786 14:28:40 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:22:34.786 14:28:40 -- host/discovery.sh@83 -- # get_bdev_list 00:22:34.786 14:28:40 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:34.786 14:28:40 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:34.786 14:28:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.786 14:28:40 -- host/discovery.sh@55 -- # sort 00:22:34.786 14:28:40 -- common/autotest_common.sh@10 -- # set +x 00:22:34.786 14:28:40 -- host/discovery.sh@55 -- # xargs 00:22:34.786 14:28:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.786 14:28:40 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:34.786 14:28:40 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:34.786 14:28:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.786 14:28:40 -- common/autotest_common.sh@10 -- # set +x 00:22:34.786 14:28:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.786 14:28:40 -- host/discovery.sh@86 -- # get_subsystem_names 00:22:34.786 14:28:40 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:34.786 14:28:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.786 14:28:40 -- common/autotest_common.sh@10 -- # set +x 00:22:34.786 14:28:40 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:34.786 14:28:40 -- host/discovery.sh@59 -- # sort 00:22:34.786 14:28:40 -- host/discovery.sh@59 -- # xargs 00:22:34.786 14:28:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.045 14:28:40 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:22:35.045 14:28:40 -- host/discovery.sh@87 -- # get_bdev_list 00:22:35.045 14:28:40 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:35.045 14:28:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.045 14:28:40 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:35.045 14:28:40 -- host/discovery.sh@55 -- # sort 00:22:35.045 14:28:40 -- host/discovery.sh@55 -- # xargs 00:22:35.045 14:28:40 -- common/autotest_common.sh@10 -- # set +x 00:22:35.045 14:28:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.045 14:28:40 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:35.045 14:28:40 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:35.045 14:28:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.045 14:28:40 -- common/autotest_common.sh@10 -- # set +x 00:22:35.045 [2024-12-05 14:28:40.514676] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:35.045 14:28:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.045 14:28:40 -- host/discovery.sh@92 -- # get_subsystem_names 00:22:35.045 14:28:40 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:35.045 14:28:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.045 14:28:40 -- host/discovery.sh@59 -- # sort 00:22:35.045 14:28:40 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:35.045 14:28:40 -- common/autotest_common.sh@10 -- # set +x 00:22:35.045 14:28:40 -- host/discovery.sh@59 -- # xargs 
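The densely interleaved lines above are discovery.sh's polling helpers being expanded by xtrace. Reassembled from those fragments (so treat this as a readable reconstruction rather than the literal script), the helpers are thin rpc_cmd-plus-jq pipelines, where rpc_cmd is the test suite's wrapper around scripts/rpc.py:

# Helpers as they appear piecewise in the xtrace above, reassembled for readability.
HOST_SOCK=/tmp/host.sock

get_subsystem_names() {    # controller names the host-side bdev_nvme has attached
    rpc_cmd -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {          # namespaces exposed as bdevs (nvme0n1, nvme0n2, ...)
    rpc_cmd -s "$HOST_SOCK" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

get_subsystem_paths() {    # listener ports currently attached for one controller
    rpc_cmd -s "$HOST_SOCK" bdev_nvme_get_controllers -n "$1" |
        jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}

get_notification_count() { # bdev add/remove notifications since the last notify_id
    rpc_cmd -s "$HOST_SOCK" notify_get_notifications -i "$notify_id" | jq '. | length'
}

Every [[ ... == ... ]] assertion in the log is the test comparing one of these helper outputs (empty before the subsystem exists, then "nvme0", "nvme0n1 nvme0n2", "4420 4421", and so on) against the state it expects after the most recent RPC.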
00:22:35.045 14:28:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.045 14:28:40 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:35.045 14:28:40 -- host/discovery.sh@93 -- # get_bdev_list 00:22:35.045 14:28:40 -- host/discovery.sh@55 -- # sort 00:22:35.045 14:28:40 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:35.045 14:28:40 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:35.045 14:28:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.045 14:28:40 -- host/discovery.sh@55 -- # xargs 00:22:35.045 14:28:40 -- common/autotest_common.sh@10 -- # set +x 00:22:35.045 14:28:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.045 14:28:40 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:22:35.045 14:28:40 -- host/discovery.sh@94 -- # get_notification_count 00:22:35.045 14:28:40 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:35.045 14:28:40 -- host/discovery.sh@74 -- # jq '. | length' 00:22:35.045 14:28:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.045 14:28:40 -- common/autotest_common.sh@10 -- # set +x 00:22:35.045 14:28:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.045 14:28:40 -- host/discovery.sh@74 -- # notification_count=0 00:22:35.045 14:28:40 -- host/discovery.sh@75 -- # notify_id=0 00:22:35.045 14:28:40 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:22:35.045 14:28:40 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:35.045 14:28:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.045 14:28:40 -- common/autotest_common.sh@10 -- # set +x 00:22:35.045 14:28:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.045 14:28:40 -- host/discovery.sh@100 -- # sleep 1 00:22:35.612 [2024-12-05 14:28:41.187385] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:35.612 [2024-12-05 14:28:41.187418] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:35.612 [2024-12-05 14:28:41.187436] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:35.870 [2024-12-05 14:28:41.273523] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:35.870 [2024-12-05 14:28:41.329198] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:35.870 [2024-12-05 14:28:41.329229] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:36.129 14:28:41 -- host/discovery.sh@101 -- # get_subsystem_names 00:22:36.129 14:28:41 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:36.129 14:28:41 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:36.129 14:28:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.129 14:28:41 -- common/autotest_common.sh@10 -- # set +x 00:22:36.129 14:28:41 -- host/discovery.sh@59 -- # sort 00:22:36.129 14:28:41 -- host/discovery.sh@59 -- # xargs 00:22:36.129 14:28:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.129 14:28:41 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.129 14:28:41 -- host/discovery.sh@102 -- # get_bdev_list 00:22:36.129 14:28:41 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:22:36.129 14:28:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.129 14:28:41 -- common/autotest_common.sh@10 -- # set +x 00:22:36.129 14:28:41 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:36.129 14:28:41 -- host/discovery.sh@55 -- # sort 00:22:36.129 14:28:41 -- host/discovery.sh@55 -- # xargs 00:22:36.129 14:28:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.388 14:28:41 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:36.388 14:28:41 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:22:36.388 14:28:41 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:36.388 14:28:41 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:36.388 14:28:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.388 14:28:41 -- common/autotest_common.sh@10 -- # set +x 00:22:36.388 14:28:41 -- host/discovery.sh@63 -- # sort -n 00:22:36.388 14:28:41 -- host/discovery.sh@63 -- # xargs 00:22:36.388 14:28:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.388 14:28:41 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:22:36.388 14:28:41 -- host/discovery.sh@104 -- # get_notification_count 00:22:36.388 14:28:41 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:36.388 14:28:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.388 14:28:41 -- common/autotest_common.sh@10 -- # set +x 00:22:36.388 14:28:41 -- host/discovery.sh@74 -- # jq '. | length' 00:22:36.388 14:28:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.388 14:28:41 -- host/discovery.sh@74 -- # notification_count=1 00:22:36.388 14:28:41 -- host/discovery.sh@75 -- # notify_id=1 00:22:36.388 14:28:41 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:22:36.388 14:28:41 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:36.388 14:28:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.388 14:28:41 -- common/autotest_common.sh@10 -- # set +x 00:22:36.388 14:28:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.388 14:28:41 -- host/discovery.sh@109 -- # sleep 1 00:22:37.324 14:28:42 -- host/discovery.sh@110 -- # get_bdev_list 00:22:37.324 14:28:42 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:37.324 14:28:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.324 14:28:42 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:37.324 14:28:42 -- common/autotest_common.sh@10 -- # set +x 00:22:37.324 14:28:42 -- host/discovery.sh@55 -- # sort 00:22:37.324 14:28:42 -- host/discovery.sh@55 -- # xargs 00:22:37.324 14:28:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.582 14:28:42 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:37.582 14:28:42 -- host/discovery.sh@111 -- # get_notification_count 00:22:37.582 14:28:42 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:37.582 14:28:42 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:37.582 14:28:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.582 14:28:42 -- common/autotest_common.sh@10 -- # set +x 00:22:37.582 14:28:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.582 14:28:43 -- host/discovery.sh@74 -- # notification_count=1 00:22:37.582 14:28:43 -- host/discovery.sh@75 -- # notify_id=2 00:22:37.582 14:28:43 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:22:37.582 14:28:43 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:37.582 14:28:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.583 14:28:43 -- common/autotest_common.sh@10 -- # set +x 00:22:37.583 [2024-12-05 14:28:43.040535] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:37.583 [2024-12-05 14:28:43.041137] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:37.583 [2024-12-05 14:28:43.041168] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:37.583 14:28:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.583 14:28:43 -- host/discovery.sh@117 -- # sleep 1 00:22:37.583 [2024-12-05 14:28:43.127214] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:37.583 [2024-12-05 14:28:43.189823] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:37.583 [2024-12-05 14:28:43.189851] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:37.583 [2024-12-05 14:28:43.189857] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:38.519 14:28:44 -- host/discovery.sh@118 -- # get_subsystem_names 00:22:38.519 14:28:44 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:38.519 14:28:44 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:38.519 14:28:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.519 14:28:44 -- common/autotest_common.sh@10 -- # set +x 00:22:38.519 14:28:44 -- host/discovery.sh@59 -- # sort 00:22:38.519 14:28:44 -- host/discovery.sh@59 -- # xargs 00:22:38.519 14:28:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.519 14:28:44 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.519 14:28:44 -- host/discovery.sh@119 -- # get_bdev_list 00:22:38.519 14:28:44 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:38.519 14:28:44 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:38.519 14:28:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.519 14:28:44 -- host/discovery.sh@55 -- # sort 00:22:38.519 14:28:44 -- common/autotest_common.sh@10 -- # set +x 00:22:38.519 14:28:44 -- host/discovery.sh@55 -- # xargs 00:22:38.519 14:28:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.519 14:28:44 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:38.519 14:28:44 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:22:38.519 14:28:44 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:38.519 14:28:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.519 14:28:44 -- common/autotest_common.sh@10 -- 
# set +x 00:22:38.519 14:28:44 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:38.519 14:28:44 -- host/discovery.sh@63 -- # sort -n 00:22:38.519 14:28:44 -- host/discovery.sh@63 -- # xargs 00:22:38.778 14:28:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.778 14:28:44 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:38.778 14:28:44 -- host/discovery.sh@121 -- # get_notification_count 00:22:38.778 14:28:44 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:38.778 14:28:44 -- host/discovery.sh@74 -- # jq '. | length' 00:22:38.778 14:28:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.778 14:28:44 -- common/autotest_common.sh@10 -- # set +x 00:22:38.778 14:28:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.778 14:28:44 -- host/discovery.sh@74 -- # notification_count=0 00:22:38.778 14:28:44 -- host/discovery.sh@75 -- # notify_id=2 00:22:38.778 14:28:44 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:22:38.778 14:28:44 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:38.778 14:28:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.778 14:28:44 -- common/autotest_common.sh@10 -- # set +x 00:22:38.778 [2024-12-05 14:28:44.269733] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:38.778 [2024-12-05 14:28:44.269766] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:38.778 14:28:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.778 14:28:44 -- host/discovery.sh@127 -- # sleep 1 00:22:38.778 [2024-12-05 14:28:44.277647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.778 [2024-12-05 14:28:44.277697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.778 [2024-12-05 14:28:44.277710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.778 [2024-12-05 14:28:44.277718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.778 [2024-12-05 14:28:44.277728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.778 [2024-12-05 14:28:44.277737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.778 [2024-12-05 14:28:44.277753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.778 [2024-12-05 14:28:44.277762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.778 [2024-12-05 14:28:44.277770] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c3570 is same with the state(5) to be set 00:22:38.778 [2024-12-05 14:28:44.287601] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11c3570 (9): Bad file descriptor 00:22:38.778 [2024-12-05 14:28:44.297619] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:38.778 [2024-12-05 14:28:44.297713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.778 [2024-12-05 14:28:44.297763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.778 [2024-12-05 14:28:44.297781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3570 with addr=10.0.0.2, port=4420 00:22:38.778 [2024-12-05 14:28:44.297792] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c3570 is same with the state(5) to be set 00:22:38.778 [2024-12-05 14:28:44.297822] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11c3570 (9): Bad file descriptor 00:22:38.778 [2024-12-05 14:28:44.297868] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:38.779 [2024-12-05 14:28:44.297880] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:38.779 [2024-12-05 14:28:44.297890] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:38.779 [2024-12-05 14:28:44.297906] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:38.779 [2024-12-05 14:28:44.307670] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:38.779 [2024-12-05 14:28:44.307747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.779 [2024-12-05 14:28:44.307794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.779 [2024-12-05 14:28:44.307857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3570 with addr=10.0.0.2, port=4420 00:22:38.779 [2024-12-05 14:28:44.307870] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c3570 is same with the state(5) to be set 00:22:38.779 [2024-12-05 14:28:44.307887] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11c3570 (9): Bad file descriptor 00:22:38.779 [2024-12-05 14:28:44.307914] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:38.779 [2024-12-05 14:28:44.307925] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:38.779 [2024-12-05 14:28:44.307935] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:38.779 [2024-12-05 14:28:44.307950] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
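The repeated connect()/errno 111 blocks here have a straightforward reading: just before this, the test added a 4421 listener to cnode0 and removed the 4420 one, so the host's bdev_nvme still holds a path record for 10.0.0.2:4420 and every reconnect attempt to it is refused until the discovery service reports that path gone. The two target-side RPCs that open this window, as they appear in the trace:

# Listener flip that produces the retry storm above (errno 111 = connection refused).
rpc_cmd nvmf_subsystem_add_listener    nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# Until the discovery log page is re-read, bdev_nvme keeps retrying the dead 4420 path.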
00:22:38.779 [2024-12-05 14:28:44.317720] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:38.779 [2024-12-05 14:28:44.317815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.779 [2024-12-05 14:28:44.317883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.779 [2024-12-05 14:28:44.317902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3570 with addr=10.0.0.2, port=4420 00:22:38.779 [2024-12-05 14:28:44.317912] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c3570 is same with the state(5) to be set 00:22:38.779 [2024-12-05 14:28:44.317928] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11c3570 (9): Bad file descriptor 00:22:38.779 [2024-12-05 14:28:44.317960] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:38.779 [2024-12-05 14:28:44.317972] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:38.779 [2024-12-05 14:28:44.317981] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:38.779 [2024-12-05 14:28:44.317995] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:38.779 [2024-12-05 14:28:44.327771] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:38.779 [2024-12-05 14:28:44.327876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.779 [2024-12-05 14:28:44.327926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.779 [2024-12-05 14:28:44.327944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3570 with addr=10.0.0.2, port=4420 00:22:38.779 [2024-12-05 14:28:44.327954] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c3570 is same with the state(5) to be set 00:22:38.779 [2024-12-05 14:28:44.327980] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11c3570 (9): Bad file descriptor 00:22:38.779 [2024-12-05 14:28:44.328008] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:38.779 [2024-12-05 14:28:44.328019] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:38.779 [2024-12-05 14:28:44.328027] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:38.779 [2024-12-05 14:28:44.328042] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:38.779 [2024-12-05 14:28:44.337845] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:38.779 [2024-12-05 14:28:44.337922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.779 [2024-12-05 14:28:44.337968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.779 [2024-12-05 14:28:44.337985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3570 with addr=10.0.0.2, port=4420 00:22:38.779 [2024-12-05 14:28:44.337995] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c3570 is same with the state(5) to be set 00:22:38.779 [2024-12-05 14:28:44.338010] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11c3570 (9): Bad file descriptor 00:22:38.779 [2024-12-05 14:28:44.338034] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:38.779 [2024-12-05 14:28:44.338051] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:38.779 [2024-12-05 14:28:44.338060] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:38.779 [2024-12-05 14:28:44.338073] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:38.779 [2024-12-05 14:28:44.347904] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:38.779 [2024-12-05 14:28:44.347994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.779 [2024-12-05 14:28:44.348044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.779 [2024-12-05 14:28:44.348063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3570 with addr=10.0.0.2, port=4420 00:22:38.779 [2024-12-05 14:28:44.348074] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c3570 is same with the state(5) to be set 00:22:38.779 [2024-12-05 14:28:44.348091] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11c3570 (9): Bad file descriptor 00:22:38.779 [2024-12-05 14:28:44.348119] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:38.779 [2024-12-05 14:28:44.348130] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:38.779 [2024-12-05 14:28:44.348139] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:38.779 [2024-12-05 14:28:44.348155] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
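Once the discovery poller reports 4420 "not found" and re-confirms 4421 (the lines that follow), the stale path is dropped and the test's remaining checks reduce to roughly the sketch below, using the helpers shown earlier:

# Expected end state after the listener flip (ports and counts taken from the trace).
[[ $(get_subsystem_paths nvme0) == 4421 ]]     # only the 4421 path remains
[[ $(get_notification_count) == 0 ]]           # no new bdev notifications past notify_id=2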
00:22:38.779 [2024-12-05 14:28:44.356151] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:38.779 [2024-12-05 14:28:44.356202] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:39.716 14:28:45 -- host/discovery.sh@128 -- # get_subsystem_names 00:22:39.716 14:28:45 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:39.716 14:28:45 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:39.716 14:28:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.716 14:28:45 -- common/autotest_common.sh@10 -- # set +x 00:22:39.716 14:28:45 -- host/discovery.sh@59 -- # sort 00:22:39.716 14:28:45 -- host/discovery.sh@59 -- # xargs 00:22:39.716 14:28:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.716 14:28:45 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.716 14:28:45 -- host/discovery.sh@129 -- # get_bdev_list 00:22:39.716 14:28:45 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:39.716 14:28:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.716 14:28:45 -- common/autotest_common.sh@10 -- # set +x 00:22:39.716 14:28:45 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:39.716 14:28:45 -- host/discovery.sh@55 -- # sort 00:22:39.716 14:28:45 -- host/discovery.sh@55 -- # xargs 00:22:39.973 14:28:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.973 14:28:45 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:39.973 14:28:45 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:22:39.973 14:28:45 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:39.973 14:28:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.973 14:28:45 -- common/autotest_common.sh@10 -- # set +x 00:22:39.973 14:28:45 -- host/discovery.sh@63 -- # sort -n 00:22:39.973 14:28:45 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:39.973 14:28:45 -- host/discovery.sh@63 -- # xargs 00:22:39.973 14:28:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.973 14:28:45 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:22:39.973 14:28:45 -- host/discovery.sh@131 -- # get_notification_count 00:22:39.973 14:28:45 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:39.973 14:28:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.973 14:28:45 -- common/autotest_common.sh@10 -- # set +x 00:22:39.973 14:28:45 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:39.973 14:28:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.973 14:28:45 -- host/discovery.sh@74 -- # notification_count=0 00:22:39.973 14:28:45 -- host/discovery.sh@75 -- # notify_id=2 00:22:39.973 14:28:45 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:22:39.973 14:28:45 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:39.973 14:28:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.973 14:28:45 -- common/autotest_common.sh@10 -- # set +x 00:22:39.973 14:28:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.973 14:28:45 -- host/discovery.sh@135 -- # sleep 1 00:22:40.908 14:28:46 -- host/discovery.sh@136 -- # get_subsystem_names 00:22:40.908 14:28:46 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:40.908 14:28:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.908 14:28:46 -- common/autotest_common.sh@10 -- # set +x 00:22:40.908 14:28:46 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:40.908 14:28:46 -- host/discovery.sh@59 -- # sort 00:22:40.908 14:28:46 -- host/discovery.sh@59 -- # xargs 00:22:40.908 14:28:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.167 14:28:46 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:22:41.167 14:28:46 -- host/discovery.sh@137 -- # get_bdev_list 00:22:41.167 14:28:46 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:41.167 14:28:46 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:41.167 14:28:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.167 14:28:46 -- common/autotest_common.sh@10 -- # set +x 00:22:41.167 14:28:46 -- host/discovery.sh@55 -- # sort 00:22:41.167 14:28:46 -- host/discovery.sh@55 -- # xargs 00:22:41.167 14:28:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.167 14:28:46 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:22:41.167 14:28:46 -- host/discovery.sh@138 -- # get_notification_count 00:22:41.167 14:28:46 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:41.167 14:28:46 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:41.167 14:28:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.167 14:28:46 -- common/autotest_common.sh@10 -- # set +x 00:22:41.167 14:28:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.167 14:28:46 -- host/discovery.sh@74 -- # notification_count=2 00:22:41.167 14:28:46 -- host/discovery.sh@75 -- # notify_id=4 00:22:41.167 14:28:46 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:22:41.167 14:28:46 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:41.167 14:28:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.167 14:28:46 -- common/autotest_common.sh@10 -- # set +x 00:22:42.102 [2024-12-05 14:28:47.697918] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:42.102 [2024-12-05 14:28:47.697945] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:42.102 [2024-12-05 14:28:47.697982] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:42.361 [2024-12-05 14:28:47.784022] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:42.361 [2024-12-05 14:28:47.843084] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:42.361 [2024-12-05 14:28:47.843126] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:42.361 14:28:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.361 14:28:47 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:42.361 14:28:47 -- common/autotest_common.sh@650 -- # local es=0 00:22:42.361 14:28:47 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:42.361 14:28:47 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:42.361 14:28:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:42.361 14:28:47 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:42.361 14:28:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:42.361 14:28:47 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:42.361 14:28:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.361 14:28:47 -- common/autotest_common.sh@10 -- # set +x 00:22:42.361 2024/12/05 14:28:47 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:22:42.361 request: 00:22:42.361 { 00:22:42.361 "method": "bdev_nvme_start_discovery", 00:22:42.361 "params": { 00:22:42.361 "name": "nvme", 00:22:42.361 "trtype": "tcp", 00:22:42.361 "traddr": "10.0.0.2", 00:22:42.361 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:42.361 "adrfam": "ipv4", 00:22:42.361 "trsvcid": "8009", 00:22:42.361 "wait_for_attach": true 00:22:42.361 } 
00:22:42.361 } 00:22:42.361 Got JSON-RPC error response 00:22:42.361 GoRPCClient: error on JSON-RPC call 00:22:42.361 14:28:47 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:42.361 14:28:47 -- common/autotest_common.sh@653 -- # es=1 00:22:42.361 14:28:47 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:42.361 14:28:47 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:42.361 14:28:47 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:42.361 14:28:47 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:22:42.361 14:28:47 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:42.361 14:28:47 -- host/discovery.sh@67 -- # sort 00:22:42.361 14:28:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.361 14:28:47 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:42.361 14:28:47 -- common/autotest_common.sh@10 -- # set +x 00:22:42.361 14:28:47 -- host/discovery.sh@67 -- # xargs 00:22:42.361 14:28:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.361 14:28:47 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:22:42.361 14:28:47 -- host/discovery.sh@147 -- # get_bdev_list 00:22:42.361 14:28:47 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:42.361 14:28:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.361 14:28:47 -- common/autotest_common.sh@10 -- # set +x 00:22:42.361 14:28:47 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:42.361 14:28:47 -- host/discovery.sh@55 -- # sort 00:22:42.361 14:28:47 -- host/discovery.sh@55 -- # xargs 00:22:42.361 14:28:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.361 14:28:47 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:42.361 14:28:47 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:42.361 14:28:47 -- common/autotest_common.sh@650 -- # local es=0 00:22:42.361 14:28:47 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:42.361 14:28:47 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:42.361 14:28:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:42.361 14:28:47 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:42.361 14:28:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:42.361 14:28:47 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:42.361 14:28:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.361 14:28:47 -- common/autotest_common.sh@10 -- # set +x 00:22:42.361 2024/12/05 14:28:47 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:22:42.361 request: 00:22:42.361 { 00:22:42.361 "method": "bdev_nvme_start_discovery", 00:22:42.362 "params": { 00:22:42.362 "name": "nvme_second", 00:22:42.362 "trtype": "tcp", 00:22:42.362 "traddr": "10.0.0.2", 00:22:42.362 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:42.362 "adrfam": "ipv4", 00:22:42.362 
"trsvcid": "8009", 00:22:42.362 "wait_for_attach": true 00:22:42.362 } 00:22:42.362 } 00:22:42.362 Got JSON-RPC error response 00:22:42.362 GoRPCClient: error on JSON-RPC call 00:22:42.362 14:28:47 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:42.362 14:28:47 -- common/autotest_common.sh@653 -- # es=1 00:22:42.362 14:28:47 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:42.362 14:28:47 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:42.362 14:28:47 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:42.362 14:28:47 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:22:42.362 14:28:47 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:42.362 14:28:47 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:42.362 14:28:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.362 14:28:47 -- host/discovery.sh@67 -- # sort 00:22:42.362 14:28:47 -- common/autotest_common.sh@10 -- # set +x 00:22:42.362 14:28:47 -- host/discovery.sh@67 -- # xargs 00:22:42.362 14:28:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.621 14:28:48 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:22:42.621 14:28:48 -- host/discovery.sh@153 -- # get_bdev_list 00:22:42.621 14:28:48 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:42.621 14:28:48 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:42.621 14:28:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.621 14:28:48 -- host/discovery.sh@55 -- # sort 00:22:42.621 14:28:48 -- common/autotest_common.sh@10 -- # set +x 00:22:42.621 14:28:48 -- host/discovery.sh@55 -- # xargs 00:22:42.621 14:28:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.621 14:28:48 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:42.621 14:28:48 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:42.621 14:28:48 -- common/autotest_common.sh@650 -- # local es=0 00:22:42.621 14:28:48 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:42.621 14:28:48 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:42.621 14:28:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:42.621 14:28:48 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:42.621 14:28:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:42.621 14:28:48 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:42.621 14:28:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.621 14:28:48 -- common/autotest_common.sh@10 -- # set +x 00:22:43.558 [2024-12-05 14:28:49.093168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:43.558 [2024-12-05 14:28:49.093244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:43.558 [2024-12-05 14:28:49.093265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125ef80 with addr=10.0.0.2, port=8010 00:22:43.558 [2024-12-05 14:28:49.093281] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:43.558 [2024-12-05 14:28:49.093291] 
nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:43.558 [2024-12-05 14:28:49.093300] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:44.494 [2024-12-05 14:28:50.093189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:44.494 [2024-12-05 14:28:50.093272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:44.494 [2024-12-05 14:28:50.093293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1237ca0 with addr=10.0.0.2, port=8010 00:22:44.494 [2024-12-05 14:28:50.093312] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:44.494 [2024-12-05 14:28:50.093322] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:44.494 [2024-12-05 14:28:50.093331] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:45.871 [2024-12-05 14:28:51.093070] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:45.871 2024/12/05 14:28:51 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:22:45.871 request: 00:22:45.871 { 00:22:45.871 "method": "bdev_nvme_start_discovery", 00:22:45.871 "params": { 00:22:45.871 "name": "nvme_second", 00:22:45.871 "trtype": "tcp", 00:22:45.871 "traddr": "10.0.0.2", 00:22:45.871 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:45.871 "adrfam": "ipv4", 00:22:45.871 "trsvcid": "8010", 00:22:45.871 "attach_timeout_ms": 3000 00:22:45.871 } 00:22:45.871 } 00:22:45.871 Got JSON-RPC error response 00:22:45.871 GoRPCClient: error on JSON-RPC call 00:22:45.871 14:28:51 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:45.871 14:28:51 -- common/autotest_common.sh@653 -- # es=1 00:22:45.871 14:28:51 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:45.871 14:28:51 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:45.871 14:28:51 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:45.871 14:28:51 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:22:45.871 14:28:51 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:45.871 14:28:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.871 14:28:51 -- common/autotest_common.sh@10 -- # set +x 00:22:45.871 14:28:51 -- host/discovery.sh@67 -- # sort 00:22:45.871 14:28:51 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:45.871 14:28:51 -- host/discovery.sh@67 -- # xargs 00:22:45.871 14:28:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.871 14:28:51 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:22:45.871 14:28:51 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:22:45.871 14:28:51 -- host/discovery.sh@162 -- # kill 96400 00:22:45.871 14:28:51 -- host/discovery.sh@163 -- # nvmftestfini 00:22:45.871 14:28:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:45.871 14:28:51 -- nvmf/common.sh@116 -- # sync 00:22:45.871 14:28:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:45.871 14:28:51 -- nvmf/common.sh@119 -- # set +e 00:22:45.871 14:28:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:45.871 14:28:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 
00:22:45.871 rmmod nvme_tcp 00:22:45.871 rmmod nvme_fabrics 00:22:45.871 rmmod nvme_keyring 00:22:45.871 14:28:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:45.871 14:28:51 -- nvmf/common.sh@123 -- # set -e 00:22:45.871 14:28:51 -- nvmf/common.sh@124 -- # return 0 00:22:45.871 14:28:51 -- nvmf/common.sh@477 -- # '[' -n 96350 ']' 00:22:45.871 14:28:51 -- nvmf/common.sh@478 -- # killprocess 96350 00:22:45.871 14:28:51 -- common/autotest_common.sh@936 -- # '[' -z 96350 ']' 00:22:45.871 14:28:51 -- common/autotest_common.sh@940 -- # kill -0 96350 00:22:45.871 14:28:51 -- common/autotest_common.sh@941 -- # uname 00:22:45.871 14:28:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:45.871 14:28:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96350 00:22:45.871 14:28:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:45.871 killing process with pid 96350 00:22:45.872 14:28:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:45.872 14:28:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96350' 00:22:45.872 14:28:51 -- common/autotest_common.sh@955 -- # kill 96350 00:22:45.872 14:28:51 -- common/autotest_common.sh@960 -- # wait 96350 00:22:46.129 14:28:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:46.129 14:28:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:46.129 14:28:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:46.129 14:28:51 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:46.129 14:28:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:46.129 14:28:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.129 14:28:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:46.129 14:28:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.129 14:28:51 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:46.129 00:22:46.129 real 0m14.130s 00:22:46.129 user 0m27.401s 00:22:46.129 sys 0m1.758s 00:22:46.129 14:28:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:46.129 14:28:51 -- common/autotest_common.sh@10 -- # set +x 00:22:46.129 ************************************ 00:22:46.129 END TEST nvmf_discovery 00:22:46.129 ************************************ 00:22:46.129 14:28:51 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:46.129 14:28:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:46.129 14:28:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:46.129 14:28:51 -- common/autotest_common.sh@10 -- # set +x 00:22:46.129 ************************************ 00:22:46.129 START TEST nvmf_discovery_remove_ifc 00:22:46.129 ************************************ 00:22:46.129 14:28:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:46.129 * Looking for test storage... 
00:22:46.129 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:46.129 14:28:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:46.129 14:28:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:46.129 14:28:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:46.386 14:28:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:46.386 14:28:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:46.386 14:28:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:46.386 14:28:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:46.386 14:28:51 -- scripts/common.sh@335 -- # IFS=.-: 00:22:46.386 14:28:51 -- scripts/common.sh@335 -- # read -ra ver1 00:22:46.386 14:28:51 -- scripts/common.sh@336 -- # IFS=.-: 00:22:46.386 14:28:51 -- scripts/common.sh@336 -- # read -ra ver2 00:22:46.386 14:28:51 -- scripts/common.sh@337 -- # local 'op=<' 00:22:46.386 14:28:51 -- scripts/common.sh@339 -- # ver1_l=2 00:22:46.386 14:28:51 -- scripts/common.sh@340 -- # ver2_l=1 00:22:46.386 14:28:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:46.386 14:28:51 -- scripts/common.sh@343 -- # case "$op" in 00:22:46.386 14:28:51 -- scripts/common.sh@344 -- # : 1 00:22:46.386 14:28:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:46.386 14:28:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:46.386 14:28:51 -- scripts/common.sh@364 -- # decimal 1 00:22:46.386 14:28:51 -- scripts/common.sh@352 -- # local d=1 00:22:46.386 14:28:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:46.386 14:28:51 -- scripts/common.sh@354 -- # echo 1 00:22:46.386 14:28:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:46.386 14:28:51 -- scripts/common.sh@365 -- # decimal 2 00:22:46.386 14:28:51 -- scripts/common.sh@352 -- # local d=2 00:22:46.386 14:28:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:46.386 14:28:51 -- scripts/common.sh@354 -- # echo 2 00:22:46.386 14:28:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:46.386 14:28:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:46.386 14:28:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:46.386 14:28:51 -- scripts/common.sh@367 -- # return 0 00:22:46.386 14:28:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:46.386 14:28:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:46.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.386 --rc genhtml_branch_coverage=1 00:22:46.386 --rc genhtml_function_coverage=1 00:22:46.386 --rc genhtml_legend=1 00:22:46.387 --rc geninfo_all_blocks=1 00:22:46.387 --rc geninfo_unexecuted_blocks=1 00:22:46.387 00:22:46.387 ' 00:22:46.387 14:28:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:46.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.387 --rc genhtml_branch_coverage=1 00:22:46.387 --rc genhtml_function_coverage=1 00:22:46.387 --rc genhtml_legend=1 00:22:46.387 --rc geninfo_all_blocks=1 00:22:46.387 --rc geninfo_unexecuted_blocks=1 00:22:46.387 00:22:46.387 ' 00:22:46.387 14:28:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:46.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.387 --rc genhtml_branch_coverage=1 00:22:46.387 --rc genhtml_function_coverage=1 00:22:46.387 --rc genhtml_legend=1 00:22:46.387 --rc geninfo_all_blocks=1 00:22:46.387 --rc geninfo_unexecuted_blocks=1 00:22:46.387 00:22:46.387 ' 00:22:46.387 
14:28:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:46.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.387 --rc genhtml_branch_coverage=1 00:22:46.387 --rc genhtml_function_coverage=1 00:22:46.387 --rc genhtml_legend=1 00:22:46.387 --rc geninfo_all_blocks=1 00:22:46.387 --rc geninfo_unexecuted_blocks=1 00:22:46.387 00:22:46.387 ' 00:22:46.387 14:28:51 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:46.387 14:28:51 -- nvmf/common.sh@7 -- # uname -s 00:22:46.387 14:28:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:46.387 14:28:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:46.387 14:28:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:46.387 14:28:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:46.387 14:28:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:46.387 14:28:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:46.387 14:28:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:46.387 14:28:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:46.387 14:28:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:46.387 14:28:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:46.387 14:28:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:22:46.387 14:28:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:22:46.387 14:28:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:46.387 14:28:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:46.387 14:28:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:46.387 14:28:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:46.387 14:28:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:46.387 14:28:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:46.387 14:28:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:46.387 14:28:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.387 14:28:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.387 14:28:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.387 14:28:51 -- paths/export.sh@5 -- # export PATH 00:22:46.387 14:28:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.387 14:28:51 -- nvmf/common.sh@46 -- # : 0 00:22:46.387 14:28:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:46.387 14:28:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:46.387 14:28:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:46.387 14:28:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:46.387 14:28:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:46.387 14:28:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:46.387 14:28:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:46.387 14:28:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:46.387 14:28:51 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:46.387 14:28:51 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:46.387 14:28:51 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:46.387 14:28:51 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:46.387 14:28:51 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:46.387 14:28:51 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:46.387 14:28:51 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:46.387 14:28:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:46.387 14:28:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:46.387 14:28:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:46.387 14:28:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:46.387 14:28:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:46.387 14:28:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.387 14:28:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:46.387 14:28:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.387 14:28:51 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:46.387 14:28:51 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:46.387 14:28:51 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:46.387 14:28:51 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:46.387 14:28:51 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:46.387 14:28:51 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:46.387 14:28:51 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:46.387 14:28:51 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:46.387 14:28:51 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:46.387 14:28:51 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:46.387 14:28:51 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:46.387 14:28:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:46.387 14:28:51 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:46.387 14:28:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:46.387 14:28:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:46.387 14:28:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:46.387 14:28:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:46.387 14:28:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:46.387 14:28:51 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:46.387 14:28:51 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:46.387 Cannot find device "nvmf_tgt_br" 00:22:46.387 14:28:51 -- nvmf/common.sh@154 -- # true 00:22:46.387 14:28:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:46.387 Cannot find device "nvmf_tgt_br2" 00:22:46.387 14:28:51 -- nvmf/common.sh@155 -- # true 00:22:46.387 14:28:51 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:46.387 14:28:51 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:46.387 Cannot find device "nvmf_tgt_br" 00:22:46.387 14:28:51 -- nvmf/common.sh@157 -- # true 00:22:46.387 14:28:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:46.387 Cannot find device "nvmf_tgt_br2" 00:22:46.387 14:28:51 -- nvmf/common.sh@158 -- # true 00:22:46.387 14:28:51 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:46.387 14:28:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:46.387 14:28:52 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:46.387 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:46.387 14:28:52 -- nvmf/common.sh@161 -- # true 00:22:46.387 14:28:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:46.387 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:46.644 14:28:52 -- nvmf/common.sh@162 -- # true 00:22:46.644 14:28:52 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:46.644 14:28:52 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:46.644 14:28:52 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:46.644 14:28:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:46.644 14:28:52 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:46.644 14:28:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:46.644 14:28:52 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:46.644 14:28:52 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:46.644 14:28:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:46.644 14:28:52 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:46.644 14:28:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:46.644 14:28:52 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:46.644 14:28:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:46.644 14:28:52 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:46.644 14:28:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:46.644 14:28:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:46.644 14:28:52 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:46.644 14:28:52 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:46.644 14:28:52 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:46.644 14:28:52 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:46.644 14:28:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:46.644 14:28:52 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:46.644 14:28:52 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:46.644 14:28:52 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:46.644 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:46.644 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:22:46.644 00:22:46.644 --- 10.0.0.2 ping statistics --- 00:22:46.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.644 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:22:46.644 14:28:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:46.644 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:46.644 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:22:46.644 00:22:46.644 --- 10.0.0.3 ping statistics --- 00:22:46.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.644 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:22:46.644 14:28:52 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:46.644 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:46.644 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:22:46.644 00:22:46.644 --- 10.0.0.1 ping statistics --- 00:22:46.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.644 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:22:46.644 14:28:52 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:46.644 14:28:52 -- nvmf/common.sh@421 -- # return 0 00:22:46.644 14:28:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:46.644 14:28:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:46.644 14:28:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:46.644 14:28:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:46.644 14:28:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:46.644 14:28:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:46.644 14:28:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:46.644 14:28:52 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:46.644 14:28:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:46.644 14:28:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:46.644 14:28:52 -- common/autotest_common.sh@10 -- # set +x 00:22:46.644 14:28:52 -- nvmf/common.sh@469 -- # nvmfpid=96914 00:22:46.644 14:28:52 -- nvmf/common.sh@470 -- # waitforlisten 96914 00:22:46.644 14:28:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:46.644 14:28:52 -- common/autotest_common.sh@829 -- # '[' -z 96914 ']' 00:22:46.644 14:28:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:46.644 14:28:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:46.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:46.644 14:28:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:46.644 14:28:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:46.645 14:28:52 -- common/autotest_common.sh@10 -- # set +x 00:22:46.902 [2024-12-05 14:28:52.328537] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:46.902 [2024-12-05 14:28:52.328630] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:46.902 [2024-12-05 14:28:52.472129] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.159 [2024-12-05 14:28:52.547915] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:47.159 [2024-12-05 14:28:52.548108] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:47.159 [2024-12-05 14:28:52.548127] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:47.159 [2024-12-05 14:28:52.548139] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:47.159 [2024-12-05 14:28:52.548181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.093 14:28:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:48.093 14:28:53 -- common/autotest_common.sh@862 -- # return 0 00:22:48.093 14:28:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:48.094 14:28:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:48.094 14:28:53 -- common/autotest_common.sh@10 -- # set +x 00:22:48.094 14:28:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:48.094 14:28:53 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:48.094 14:28:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.094 14:28:53 -- common/autotest_common.sh@10 -- # set +x 00:22:48.094 [2024-12-05 14:28:53.446578] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:48.094 [2024-12-05 14:28:53.454682] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:48.094 null0 00:22:48.094 [2024-12-05 14:28:53.486623] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:48.094 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:48.094 14:28:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.094 14:28:53 -- host/discovery_remove_ifc.sh@59 -- # hostpid=96964 00:22:48.094 14:28:53 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:48.094 14:28:53 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 96964 /tmp/host.sock 00:22:48.094 14:28:53 -- common/autotest_common.sh@829 -- # '[' -z 96964 ']' 00:22:48.094 14:28:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:48.094 14:28:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:48.094 14:28:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:48.094 14:28:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:48.094 14:28:53 -- common/autotest_common.sh@10 -- # set +x 00:22:48.094 [2024-12-05 14:28:53.553728] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:48.094 [2024-12-05 14:28:53.554035] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96964 ] 00:22:48.094 [2024-12-05 14:28:53.691351] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.352 [2024-12-05 14:28:53.757943] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:48.352 [2024-12-05 14:28:53.758470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.352 14:28:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:48.352 14:28:53 -- common/autotest_common.sh@862 -- # return 0 00:22:48.352 14:28:53 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:48.352 14:28:53 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:48.352 14:28:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.352 14:28:53 -- common/autotest_common.sh@10 -- # set +x 00:22:48.352 14:28:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.352 14:28:53 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:48.352 14:28:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.352 14:28:53 -- common/autotest_common.sh@10 -- # set +x 00:22:48.352 14:28:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.352 14:28:53 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:48.352 14:28:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.352 14:28:53 -- common/autotest_common.sh@10 -- # set +x 00:22:49.290 [2024-12-05 14:28:54.909660] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:49.290 [2024-12-05 14:28:54.909689] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:49.290 [2024-12-05 14:28:54.909706] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:49.548 [2024-12-05 14:28:54.995756] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:49.548 [2024-12-05 14:28:55.051247] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:49.548 [2024-12-05 14:28:55.051428] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:49.548 [2024-12-05 14:28:55.051467] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:49.548 [2024-12-05 14:28:55.051485] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:49.548 [2024-12-05 14:28:55.051506] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:49.548 14:28:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.548 14:28:55 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:49.548 14:28:55 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:49.548 14:28:55 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:49.548 14:28:55 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:49.548 [2024-12-05 14:28:55.058278] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x214eda0 was disconnected and fre 14:28:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.548 ed. delete nvme_qpair. 00:22:49.548 14:28:55 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:49.548 14:28:55 -- common/autotest_common.sh@10 -- # set +x 00:22:49.548 14:28:55 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:49.548 14:28:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.548 14:28:55 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:49.548 14:28:55 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:22:49.548 14:28:55 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:22:49.548 14:28:55 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:49.548 14:28:55 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:49.548 14:28:55 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:49.548 14:28:55 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:49.548 14:28:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.548 14:28:55 -- common/autotest_common.sh@10 -- # set +x 00:22:49.548 14:28:55 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:49.548 14:28:55 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:49.548 14:28:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.548 14:28:55 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:49.548 14:28:55 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:50.924 14:28:56 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:50.924 14:28:56 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:50.924 14:28:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.924 14:28:56 -- common/autotest_common.sh@10 -- # set +x 00:22:50.924 14:28:56 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:50.924 14:28:56 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:50.924 14:28:56 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:50.924 14:28:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.924 14:28:56 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:50.924 14:28:56 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:51.861 14:28:57 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:51.861 14:28:57 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:51.861 14:28:57 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:51.861 14:28:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.861 14:28:57 -- common/autotest_common.sh@10 -- # set +x 00:22:51.861 14:28:57 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:51.861 14:28:57 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:51.861 14:28:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.861 14:28:57 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:51.861 14:28:57 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:52.797 14:28:58 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:52.797 14:28:58 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:52.797 14:28:58 
-- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:52.797 14:28:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.797 14:28:58 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:52.797 14:28:58 -- common/autotest_common.sh@10 -- # set +x 00:22:52.797 14:28:58 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:52.797 14:28:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.797 14:28:58 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:52.797 14:28:58 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:53.735 14:28:59 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:53.735 14:28:59 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:53.735 14:28:59 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:53.735 14:28:59 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:53.735 14:28:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.735 14:28:59 -- common/autotest_common.sh@10 -- # set +x 00:22:53.735 14:28:59 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:53.735 14:28:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.994 14:28:59 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:53.994 14:28:59 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:54.931 14:29:00 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:54.931 14:29:00 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:54.931 14:29:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.931 14:29:00 -- common/autotest_common.sh@10 -- # set +x 00:22:54.931 14:29:00 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:54.931 14:29:00 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:54.931 14:29:00 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:54.931 14:29:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.931 14:29:00 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:54.931 14:29:00 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:54.931 [2024-12-05 14:29:00.479434] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:54.931 [2024-12-05 14:29:00.479637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.931 [2024-12-05 14:29:00.479656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.931 [2024-12-05 14:29:00.479668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.931 [2024-12-05 14:29:00.479677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.931 [2024-12-05 14:29:00.479685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.931 [2024-12-05 14:29:00.479693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.931 [2024-12-05 14:29:00.479703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.931 [2024-12-05 14:29:00.479711] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.931 [2024-12-05 14:29:00.479719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.931 [2024-12-05 14:29:00.479728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.931 [2024-12-05 14:29:00.479736] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b8690 is same with the state(5) to be set 00:22:54.931 [2024-12-05 14:29:00.489431] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b8690 (9): Bad file descriptor 00:22:54.931 [2024-12-05 14:29:00.499455] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:55.868 14:29:01 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:55.868 14:29:01 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:55.868 14:29:01 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:55.868 14:29:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.868 14:29:01 -- common/autotest_common.sh@10 -- # set +x 00:22:55.868 14:29:01 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:55.868 14:29:01 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:55.868 [2024-12-05 14:29:01.509065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:57.246 [2024-12-05 14:29:02.530931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:57.246 [2024-12-05 14:29:02.531026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20b8690 with addr=10.0.0.2, port=4420 00:22:57.246 [2024-12-05 14:29:02.531061] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b8690 is same with the state(5) to be set 00:22:57.246 [2024-12-05 14:29:02.531111] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:57.246 [2024-12-05 14:29:02.531133] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:57.246 [2024-12-05 14:29:02.531152] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:57.246 [2024-12-05 14:29:02.531172] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:22:57.246 [2024-12-05 14:29:02.531956] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b8690 (9): Bad file descriptor 00:22:57.246 [2024-12-05 14:29:02.532065] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:57.246 [2024-12-05 14:29:02.532119] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:57.246 [2024-12-05 14:29:02.532185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:57.246 [2024-12-05 14:29:02.532213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.246 [2024-12-05 14:29:02.532238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:57.247 [2024-12-05 14:29:02.532258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.247 [2024-12-05 14:29:02.532280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:57.247 [2024-12-05 14:29:02.532300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.247 [2024-12-05 14:29:02.532321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:57.247 [2024-12-05 14:29:02.532341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.247 [2024-12-05 14:29:02.532363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:57.247 [2024-12-05 14:29:02.532383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.247 [2024-12-05 14:29:02.532402] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
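For readers following the trace: the loop above repeatedly calls get_bdev_list (rpc_cmd bdev_get_bdevs piped through jq, sort and xargs against the host-side socket /tmp/host.sock) and sleeps one second until the bdev list changes. A minimal sketch of that helper pattern, assuming the standard scripts/rpc.py client; the MAX_TRIES cap is an illustrative addition, not something the real script uses:

  # Minimal sketch of the bdev polling helper seen in discovery_remove_ifc.sh.
  # /tmp/host.sock and the jq filter come straight from the trace above.
  get_bdev_list() {
      ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
      local expected=$1 tries=0 MAX_TRIES=20
      # Re-check once per second until the bdev list matches the expectation.
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          (( ++tries > MAX_TRIES )) && return 1   # give up instead of looping forever
          sleep 1
      done
  }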
00:22:57.247 [2024-12-05 14:29:02.532463] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2116410 (9): Bad file descriptor 00:22:57.247 [2024-12-05 14:29:02.533461] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:57.247 [2024-12-05 14:29:02.533502] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:57.247 14:29:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.247 14:29:02 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:57.247 14:29:02 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:58.183 14:29:03 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:58.183 14:29:03 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:58.183 14:29:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.183 14:29:03 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:58.183 14:29:03 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:58.183 14:29:03 -- common/autotest_common.sh@10 -- # set +x 00:22:58.183 14:29:03 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:58.183 14:29:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.183 14:29:03 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:58.183 14:29:03 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:58.183 14:29:03 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:58.183 14:29:03 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:58.183 14:29:03 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:58.183 14:29:03 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:58.183 14:29:03 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:58.183 14:29:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.183 14:29:03 -- common/autotest_common.sh@10 -- # set +x 00:22:58.183 14:29:03 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:58.183 14:29:03 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:58.183 14:29:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.183 14:29:03 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:58.183 14:29:03 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:59.121 [2024-12-05 14:29:04.541033] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:59.121 [2024-12-05 14:29:04.541053] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:59.121 [2024-12-05 14:29:04.541069] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:59.121 [2024-12-05 14:29:04.627124] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:59.121 [2024-12-05 14:29:04.682329] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:59.121 [2024-12-05 14:29:04.682371] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:59.121 [2024-12-05 14:29:04.682393] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:59.121 [2024-12-05 14:29:04.682408] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:22:59.121 [2024-12-05 14:29:04.682416] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:59.121 [2024-12-05 14:29:04.689606] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x211c0c0 was disconnected and freed. delete nvme_qpair. 00:22:59.121 14:29:04 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:59.121 14:29:04 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:59.121 14:29:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.121 14:29:04 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:59.121 14:29:04 -- common/autotest_common.sh@10 -- # set +x 00:22:59.121 14:29:04 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:59.121 14:29:04 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:59.121 14:29:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.121 14:29:04 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:59.121 14:29:04 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:59.121 14:29:04 -- host/discovery_remove_ifc.sh@90 -- # killprocess 96964 00:22:59.121 14:29:04 -- common/autotest_common.sh@936 -- # '[' -z 96964 ']' 00:22:59.121 14:29:04 -- common/autotest_common.sh@940 -- # kill -0 96964 00:22:59.121 14:29:04 -- common/autotest_common.sh@941 -- # uname 00:22:59.121 14:29:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:59.121 14:29:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96964 00:22:59.413 killing process with pid 96964 00:22:59.414 14:29:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:59.414 14:29:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:59.414 14:29:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96964' 00:22:59.414 14:29:04 -- common/autotest_common.sh@955 -- # kill 96964 00:22:59.414 14:29:04 -- common/autotest_common.sh@960 -- # wait 96964 00:22:59.414 14:29:04 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:59.414 14:29:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:59.414 14:29:04 -- nvmf/common.sh@116 -- # sync 00:22:59.698 14:29:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:59.698 14:29:05 -- nvmf/common.sh@119 -- # set +e 00:22:59.698 14:29:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:59.698 14:29:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:59.698 rmmod nvme_tcp 00:22:59.698 rmmod nvme_fabrics 00:22:59.698 rmmod nvme_keyring 00:22:59.698 14:29:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:59.698 14:29:05 -- nvmf/common.sh@123 -- # set -e 00:22:59.698 14:29:05 -- nvmf/common.sh@124 -- # return 0 00:22:59.698 14:29:05 -- nvmf/common.sh@477 -- # '[' -n 96914 ']' 00:22:59.698 14:29:05 -- nvmf/common.sh@478 -- # killprocess 96914 00:22:59.698 14:29:05 -- common/autotest_common.sh@936 -- # '[' -z 96914 ']' 00:22:59.698 14:29:05 -- common/autotest_common.sh@940 -- # kill -0 96914 00:22:59.698 14:29:05 -- common/autotest_common.sh@941 -- # uname 00:22:59.698 14:29:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:59.698 14:29:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96914 00:22:59.698 killing process with pid 96914 00:22:59.698 14:29:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:59.698 14:29:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
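The killprocess calls above (pids 96964 and 96914) all follow the same shape: confirm the pid still belongs to an SPDK reactor, log, signal, then reap it. A rough, simplified equivalent; the sudo special case and the strict error handling of the real helper are omitted:

  # Rough sketch of the killprocess behaviour visible in the trace.
  killprocess() {
      local pid=$1
      [[ -n "$pid" ]] || return 1
      kill -0 "$pid" 2>/dev/null || return 0          # nothing left to do
      local name
      name=$(ps --no-headers -o comm= "$pid")         # reports reactor_0 / reactor_1 here
      echo "killing process with pid $pid ($name)"
      kill "$pid"
      wait "$pid" 2>/dev/null || true                 # reap the child, ignore its exit code
  }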
00:22:59.698 14:29:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96914' 00:22:59.698 14:29:05 -- common/autotest_common.sh@955 -- # kill 96914 00:22:59.698 14:29:05 -- common/autotest_common.sh@960 -- # wait 96914 00:22:59.961 14:29:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:59.961 14:29:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:59.961 14:29:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:59.961 14:29:05 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:59.961 14:29:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:59.961 14:29:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.961 14:29:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:59.961 14:29:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.961 14:29:05 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:59.961 ************************************ 00:22:59.961 END TEST nvmf_discovery_remove_ifc 00:22:59.961 ************************************ 00:22:59.961 00:22:59.961 real 0m13.753s 00:22:59.961 user 0m23.124s 00:22:59.961 sys 0m1.498s 00:22:59.961 14:29:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:59.961 14:29:05 -- common/autotest_common.sh@10 -- # set +x 00:22:59.961 14:29:05 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:22:59.961 14:29:05 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:59.961 14:29:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:59.961 14:29:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:59.961 14:29:05 -- common/autotest_common.sh@10 -- # set +x 00:22:59.961 ************************************ 00:22:59.961 START TEST nvmf_digest 00:22:59.961 ************************************ 00:22:59.961 14:29:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:59.961 * Looking for test storage... 00:22:59.961 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:59.961 14:29:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:59.961 14:29:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:59.962 14:29:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:00.220 14:29:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:00.220 14:29:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:00.220 14:29:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:00.220 14:29:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:00.220 14:29:05 -- scripts/common.sh@335 -- # IFS=.-: 00:23:00.220 14:29:05 -- scripts/common.sh@335 -- # read -ra ver1 00:23:00.220 14:29:05 -- scripts/common.sh@336 -- # IFS=.-: 00:23:00.220 14:29:05 -- scripts/common.sh@336 -- # read -ra ver2 00:23:00.220 14:29:05 -- scripts/common.sh@337 -- # local 'op=<' 00:23:00.220 14:29:05 -- scripts/common.sh@339 -- # ver1_l=2 00:23:00.220 14:29:05 -- scripts/common.sh@340 -- # ver2_l=1 00:23:00.220 14:29:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:00.220 14:29:05 -- scripts/common.sh@343 -- # case "$op" in 00:23:00.220 14:29:05 -- scripts/common.sh@344 -- # : 1 00:23:00.220 14:29:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:00.220 14:29:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:00.220 14:29:05 -- scripts/common.sh@364 -- # decimal 1 00:23:00.220 14:29:05 -- scripts/common.sh@352 -- # local d=1 00:23:00.220 14:29:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:00.220 14:29:05 -- scripts/common.sh@354 -- # echo 1 00:23:00.220 14:29:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:00.220 14:29:05 -- scripts/common.sh@365 -- # decimal 2 00:23:00.220 14:29:05 -- scripts/common.sh@352 -- # local d=2 00:23:00.220 14:29:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:00.220 14:29:05 -- scripts/common.sh@354 -- # echo 2 00:23:00.220 14:29:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:00.220 14:29:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:00.220 14:29:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:00.220 14:29:05 -- scripts/common.sh@367 -- # return 0 00:23:00.220 14:29:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:00.220 14:29:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:00.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.220 --rc genhtml_branch_coverage=1 00:23:00.220 --rc genhtml_function_coverage=1 00:23:00.220 --rc genhtml_legend=1 00:23:00.220 --rc geninfo_all_blocks=1 00:23:00.220 --rc geninfo_unexecuted_blocks=1 00:23:00.220 00:23:00.220 ' 00:23:00.220 14:29:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:00.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.220 --rc genhtml_branch_coverage=1 00:23:00.220 --rc genhtml_function_coverage=1 00:23:00.220 --rc genhtml_legend=1 00:23:00.220 --rc geninfo_all_blocks=1 00:23:00.220 --rc geninfo_unexecuted_blocks=1 00:23:00.220 00:23:00.220 ' 00:23:00.220 14:29:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:00.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.220 --rc genhtml_branch_coverage=1 00:23:00.220 --rc genhtml_function_coverage=1 00:23:00.220 --rc genhtml_legend=1 00:23:00.220 --rc geninfo_all_blocks=1 00:23:00.220 --rc geninfo_unexecuted_blocks=1 00:23:00.220 00:23:00.220 ' 00:23:00.220 14:29:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:00.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.220 --rc genhtml_branch_coverage=1 00:23:00.220 --rc genhtml_function_coverage=1 00:23:00.220 --rc genhtml_legend=1 00:23:00.220 --rc geninfo_all_blocks=1 00:23:00.220 --rc geninfo_unexecuted_blocks=1 00:23:00.220 00:23:00.220 ' 00:23:00.220 14:29:05 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:00.220 14:29:05 -- nvmf/common.sh@7 -- # uname -s 00:23:00.220 14:29:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:00.220 14:29:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:00.220 14:29:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:00.220 14:29:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:00.220 14:29:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:00.220 14:29:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:00.220 14:29:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:00.220 14:29:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:00.220 14:29:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:00.220 14:29:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:00.220 14:29:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:23:00.220 
14:29:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:23:00.220 14:29:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:00.220 14:29:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:00.220 14:29:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:00.220 14:29:05 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:00.221 14:29:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:00.221 14:29:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:00.221 14:29:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:00.221 14:29:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.221 14:29:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.221 14:29:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.221 14:29:05 -- paths/export.sh@5 -- # export PATH 00:23:00.221 14:29:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.221 14:29:05 -- nvmf/common.sh@46 -- # : 0 00:23:00.221 14:29:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:00.221 14:29:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:00.221 14:29:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:00.221 14:29:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:00.221 14:29:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:00.221 14:29:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
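The NVME_HOSTNQN/NVME_HOSTID values above come from nvme-cli's gen-hostnqn. Roughly, the host identity used by later nvme connect calls is built as below; the parameter expansion is an assumption that happens to match the values in this trace, not necessarily the exact code in nvmf/common.sh:

  # Sketch of the host identity setup traced in nvmf/common.sh.
  NVME_HOSTNQN=$(nvme gen-hostnqn)          # nqn.2014-08.org.nvmexpress:uuid:<random uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}           # uuid portion only, as seen above
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  # later used as: nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n <subsystem nqn>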
00:23:00.221 14:29:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:00.221 14:29:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:00.221 14:29:05 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:23:00.221 14:29:05 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:23:00.221 14:29:05 -- host/digest.sh@16 -- # runtime=2 00:23:00.221 14:29:05 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:23:00.221 14:29:05 -- host/digest.sh@132 -- # nvmftestinit 00:23:00.221 14:29:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:00.221 14:29:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:00.221 14:29:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:00.221 14:29:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:00.221 14:29:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:00.221 14:29:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.221 14:29:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:00.221 14:29:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.221 14:29:05 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:00.221 14:29:05 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:00.221 14:29:05 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:00.221 14:29:05 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:00.221 14:29:05 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:00.221 14:29:05 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:00.221 14:29:05 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:00.221 14:29:05 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:00.221 14:29:05 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:00.221 14:29:05 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:00.221 14:29:05 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:00.221 14:29:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:00.221 14:29:05 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:00.221 14:29:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:00.221 14:29:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:00.221 14:29:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:00.221 14:29:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:00.221 14:29:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:00.221 14:29:05 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:00.221 14:29:05 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:00.221 Cannot find device "nvmf_tgt_br" 00:23:00.221 14:29:05 -- nvmf/common.sh@154 -- # true 00:23:00.221 14:29:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:00.221 Cannot find device "nvmf_tgt_br2" 00:23:00.221 14:29:05 -- nvmf/common.sh@155 -- # true 00:23:00.221 14:29:05 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:00.221 14:29:05 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:00.221 Cannot find device "nvmf_tgt_br" 00:23:00.221 14:29:05 -- nvmf/common.sh@157 -- # true 00:23:00.221 14:29:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:00.221 Cannot find device "nvmf_tgt_br2" 00:23:00.221 14:29:05 -- nvmf/common.sh@158 -- # true 00:23:00.221 14:29:05 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:00.221 14:29:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:00.221 
14:29:05 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:00.221 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:00.221 14:29:05 -- nvmf/common.sh@161 -- # true 00:23:00.221 14:29:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:00.221 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:00.221 14:29:05 -- nvmf/common.sh@162 -- # true 00:23:00.221 14:29:05 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:00.479 14:29:05 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:00.479 14:29:05 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:00.479 14:29:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:00.479 14:29:05 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:00.479 14:29:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:00.479 14:29:05 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:00.479 14:29:05 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:00.479 14:29:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:00.479 14:29:05 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:00.479 14:29:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:00.479 14:29:05 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:23:00.480 14:29:05 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:00.480 14:29:05 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:00.480 14:29:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:00.480 14:29:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:00.480 14:29:05 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:00.480 14:29:05 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:00.480 14:29:05 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:00.480 14:29:05 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:00.480 14:29:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:00.480 14:29:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:00.480 14:29:06 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:00.480 14:29:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:00.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:00.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:23:00.480 00:23:00.480 --- 10.0.0.2 ping statistics --- 00:23:00.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.480 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:23:00.480 14:29:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:00.480 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:23:00.480 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:23:00.480 00:23:00.480 --- 10.0.0.3 ping statistics --- 00:23:00.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.480 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:23:00.480 14:29:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:00.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:00.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:23:00.480 00:23:00.480 --- 10.0.0.1 ping statistics --- 00:23:00.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.480 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:23:00.480 14:29:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:00.480 14:29:06 -- nvmf/common.sh@421 -- # return 0 00:23:00.480 14:29:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:00.480 14:29:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:00.480 14:29:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:00.480 14:29:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:00.480 14:29:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:00.480 14:29:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:00.480 14:29:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:00.480 14:29:06 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:00.480 14:29:06 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:23:00.480 14:29:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:00.480 14:29:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:00.480 14:29:06 -- common/autotest_common.sh@10 -- # set +x 00:23:00.480 ************************************ 00:23:00.480 START TEST nvmf_digest_clean 00:23:00.480 ************************************ 00:23:00.480 14:29:06 -- common/autotest_common.sh@1114 -- # run_digest 00:23:00.480 14:29:06 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:23:00.480 14:29:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:00.480 14:29:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:00.480 14:29:06 -- common/autotest_common.sh@10 -- # set +x 00:23:00.480 14:29:06 -- nvmf/common.sh@469 -- # nvmfpid=97378 00:23:00.480 14:29:06 -- nvmf/common.sh@470 -- # waitforlisten 97378 00:23:00.480 14:29:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:00.480 14:29:06 -- common/autotest_common.sh@829 -- # '[' -z 97378 ']' 00:23:00.480 14:29:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.480 14:29:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:00.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.480 14:29:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.480 14:29:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:00.480 14:29:06 -- common/autotest_common.sh@10 -- # set +x 00:23:00.480 [2024-12-05 14:29:06.114797] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
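The block above is nvmf_veth_init building the test network from scratch before the target starts: a namespace for nvmf_tgt, veth pairs, a bridge, 10.0.0.x addresses, an iptables accept rule for port 4420 and ping sanity checks. Condensed into a sketch with a single target interface; the real helper also creates nvmf_tgt_if2/nvmf_tgt_br2 for 10.0.0.3 and tears down any stale devices first:

  # Condensed sketch of the veth/netns topology set up by nvmf_veth_init.
  NS=nvmf_tgt_ns_spdk
  ip netns add "$NS"
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns "$NS"

  ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator side
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target side
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec "$NS" ip link set nvmf_tgt_if up
  ip netns exec "$NS" ip link set lo up

  # Bridge the host-side peers together and allow NVMe/TCP traffic in.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                            # sanity check into the namespace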
00:23:00.480 [2024-12-05 14:29:06.115022] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.738 [2024-12-05 14:29:06.249037] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.738 [2024-12-05 14:29:06.304055] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:00.738 [2024-12-05 14:29:06.304189] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:00.738 [2024-12-05 14:29:06.304202] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:00.738 [2024-12-05 14:29:06.304210] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:00.738 [2024-12-05 14:29:06.304234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.738 14:29:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:00.738 14:29:06 -- common/autotest_common.sh@862 -- # return 0 00:23:00.738 14:29:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:00.738 14:29:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:00.738 14:29:06 -- common/autotest_common.sh@10 -- # set +x 00:23:00.996 14:29:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:00.996 14:29:06 -- host/digest.sh@120 -- # common_target_config 00:23:00.996 14:29:06 -- host/digest.sh@43 -- # rpc_cmd 00:23:00.996 14:29:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.996 14:29:06 -- common/autotest_common.sh@10 -- # set +x 00:23:00.996 null0 00:23:00.996 [2024-12-05 14:29:06.525124] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:00.996 [2024-12-05 14:29:06.549228] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:00.996 14:29:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.996 14:29:06 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:23:00.996 14:29:06 -- host/digest.sh@77 -- # local rw bs qd 00:23:00.996 14:29:06 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:00.996 14:29:06 -- host/digest.sh@80 -- # rw=randread 00:23:00.996 14:29:06 -- host/digest.sh@80 -- # bs=4096 00:23:00.996 14:29:06 -- host/digest.sh@80 -- # qd=128 00:23:00.996 14:29:06 -- host/digest.sh@82 -- # bperfpid=97410 00:23:00.996 14:29:06 -- host/digest.sh@83 -- # waitforlisten 97410 /var/tmp/bperf.sock 00:23:00.996 14:29:06 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:00.996 14:29:06 -- common/autotest_common.sh@829 -- # '[' -z 97410 ']' 00:23:00.996 14:29:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:00.996 14:29:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:00.996 14:29:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:00.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
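At this point nvmfappstart has launched nvmf_tgt inside the namespace with --wait-for-rpc and waitforlisten has blocked on /var/tmp/spdk.sock; common_target_config then creates the null0 bdev and the TCP listener on 10.0.0.2:4420 that the notices above report. The trace does not show the individual RPCs, so the following is only an approximate sequence; the bdev size, block size and serial number are placeholders, not values from this run:

  # Approximate target-side bring-up matching the notices above.
  RPC="./scripts/rpc.py"                       # talks to the default /var/tmp/spdk.sock
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  # ... waitforlisten polls until the RPC socket answers, then:
  $RPC framework_start_init
  $RPC nvmf_create_transport -t tcp -o                          # NVMF_TRANSPORT_OPTS from the trace
  $RPC bdev_null_create null0 1000 512                          # placeholder size / block size
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420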
00:23:00.996 14:29:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:00.996 14:29:06 -- common/autotest_common.sh@10 -- # set +x 00:23:00.996 [2024-12-05 14:29:06.610159] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:00.996 [2024-12-05 14:29:06.610436] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97410 ] 00:23:01.254 [2024-12-05 14:29:06.752414] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.254 [2024-12-05 14:29:06.837281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:02.189 14:29:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:02.190 14:29:07 -- common/autotest_common.sh@862 -- # return 0 00:23:02.190 14:29:07 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:23:02.190 14:29:07 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:23:02.190 14:29:07 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:02.448 14:29:07 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:02.448 14:29:07 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:02.706 nvme0n1 00:23:02.706 14:29:08 -- host/digest.sh@91 -- # bperf_py perform_tests 00:23:02.706 14:29:08 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:02.706 Running I/O for 2 seconds... 
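The run_bperf helper whose trace ends in "Running I/O for 2 seconds..." drives bdevperf over its own RPC socket: start it idle with -z --wait-for-rpc, finish framework init, attach the target with data digest enabled, then kick off the workload from bdevperf.py. Reduced to a sketch using the paths and arguments shown in the trace for the randread 4096/128 case:

  # Sketch of the run_bperf flow for the randread 4096/128 run above.
  BPERF_SOCK=/var/tmp/bperf.sock
  ./build/examples/bdevperf -m 2 -r "$BPERF_SOCK" -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  bperfpid=$!
  # ... waitforlisten blocks until $BPERF_SOCK is up, then:
  ./scripts/rpc.py -s "$BPERF_SOCK" framework_start_init
  ./scripts/rpc.py -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # perform_tests tells the idle (-z) bdevperf instance to start its 2 second run
  ./examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests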
00:23:05.235 00:23:05.235 Latency(us) 00:23:05.235 [2024-12-05T14:29:10.883Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.235 [2024-12-05T14:29:10.883Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:05.235 nvme0n1 : 2.00 24344.62 95.10 0.00 0.00 5253.20 2308.65 18111.77 00:23:05.235 [2024-12-05T14:29:10.883Z] =================================================================================================================== 00:23:05.235 [2024-12-05T14:29:10.883Z] Total : 24344.62 95.10 0.00 0.00 5253.20 2308.65 18111.77 00:23:05.235 0 00:23:05.235 14:29:10 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:23:05.235 14:29:10 -- host/digest.sh@92 -- # get_accel_stats 00:23:05.235 14:29:10 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:05.235 14:29:10 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:05.235 | select(.opcode=="crc32c") 00:23:05.235 | "\(.module_name) \(.executed)"' 00:23:05.235 14:29:10 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:05.235 14:29:10 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:23:05.235 14:29:10 -- host/digest.sh@93 -- # exp_module=software 00:23:05.236 14:29:10 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:23:05.236 14:29:10 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:05.236 14:29:10 -- host/digest.sh@97 -- # killprocess 97410 00:23:05.236 14:29:10 -- common/autotest_common.sh@936 -- # '[' -z 97410 ']' 00:23:05.236 14:29:10 -- common/autotest_common.sh@940 -- # kill -0 97410 00:23:05.236 14:29:10 -- common/autotest_common.sh@941 -- # uname 00:23:05.236 14:29:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:05.236 14:29:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97410 00:23:05.236 14:29:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:05.236 14:29:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:05.236 killing process with pid 97410 00:23:05.236 14:29:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97410' 00:23:05.236 14:29:10 -- common/autotest_common.sh@955 -- # kill 97410 00:23:05.236 Received shutdown signal, test time was about 2.000000 seconds 00:23:05.236 00:23:05.236 Latency(us) 00:23:05.236 [2024-12-05T14:29:10.884Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.236 [2024-12-05T14:29:10.884Z] =================================================================================================================== 00:23:05.236 [2024-12-05T14:29:10.884Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:05.236 14:29:10 -- common/autotest_common.sh@960 -- # wait 97410 00:23:05.494 14:29:10 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:23:05.494 14:29:10 -- host/digest.sh@77 -- # local rw bs qd 00:23:05.494 14:29:10 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:05.494 14:29:10 -- host/digest.sh@80 -- # rw=randread 00:23:05.494 14:29:10 -- host/digest.sh@80 -- # bs=131072 00:23:05.494 14:29:10 -- host/digest.sh@80 -- # qd=16 00:23:05.494 14:29:10 -- host/digest.sh@82 -- # bperfpid=97505 00:23:05.494 14:29:10 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:05.494 14:29:10 -- host/digest.sh@83 -- # waitforlisten 97505 /var/tmp/bperf.sock 00:23:05.494 14:29:10 -- 
common/autotest_common.sh@829 -- # '[' -z 97505 ']' 00:23:05.494 14:29:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:05.494 14:29:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:05.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:05.494 14:29:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:05.494 14:29:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:05.494 14:29:10 -- common/autotest_common.sh@10 -- # set +x 00:23:05.494 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:05.494 Zero copy mechanism will not be used. 00:23:05.494 [2024-12-05 14:29:10.965555] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:05.494 [2024-12-05 14:29:10.965634] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97505 ] 00:23:05.494 [2024-12-05 14:29:11.089838] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.754 [2024-12-05 14:29:11.170273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:05.754 14:29:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:05.754 14:29:11 -- common/autotest_common.sh@862 -- # return 0 00:23:05.754 14:29:11 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:23:05.754 14:29:11 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:23:05.754 14:29:11 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:06.012 14:29:11 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:06.012 14:29:11 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:06.271 nvme0n1 00:23:06.271 14:29:11 -- host/digest.sh@91 -- # bperf_py perform_tests 00:23:06.271 14:29:11 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:06.530 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:06.530 Zero copy mechanism will not be used. 00:23:06.530 Running I/O for 2 seconds... 
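Each run ends with the same verification visible above: pull accel statistics from the bdevperf instance and confirm that the crc32c digest work was actually executed, and executed by the expected module (software here, since no accelerator module is assigned in the clean tests). A compact sketch of that check:

  # Sketch of the get_accel_stats verification performed after every bperf run.
  stats=$(./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats | jq -rc '.operations[]
      | select(.opcode=="crc32c")
      | "\(.module_name) \(.executed)"')
  read -r acc_module acc_executed <<< "$stats"
  exp_module=software                            # no accel module assigned in the clean tests
  (( acc_executed > 0 )) || exit 1               # digest operations must actually have run
  [[ $acc_module == "$exp_module" ]] || exit 1   # and must have run in the expected module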
00:23:08.434 00:23:08.434 Latency(us) 00:23:08.434 [2024-12-05T14:29:14.082Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.434 [2024-12-05T14:29:14.082Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:23:08.434 nvme0n1 : 2.00 9094.49 1136.81 0.00 0.00 1756.61 688.87 10187.87 00:23:08.434 [2024-12-05T14:29:14.082Z] =================================================================================================================== 00:23:08.434 [2024-12-05T14:29:14.082Z] Total : 9094.49 1136.81 0.00 0.00 1756.61 688.87 10187.87 00:23:08.434 0 00:23:08.434 14:29:13 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:23:08.434 14:29:13 -- host/digest.sh@92 -- # get_accel_stats 00:23:08.434 14:29:13 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:08.434 14:29:13 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:08.434 | select(.opcode=="crc32c") 00:23:08.434 | "\(.module_name) \(.executed)"' 00:23:08.434 14:29:13 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:08.694 14:29:14 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:23:08.694 14:29:14 -- host/digest.sh@93 -- # exp_module=software 00:23:08.694 14:29:14 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:23:08.694 14:29:14 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:08.694 14:29:14 -- host/digest.sh@97 -- # killprocess 97505 00:23:08.694 14:29:14 -- common/autotest_common.sh@936 -- # '[' -z 97505 ']' 00:23:08.694 14:29:14 -- common/autotest_common.sh@940 -- # kill -0 97505 00:23:08.694 14:29:14 -- common/autotest_common.sh@941 -- # uname 00:23:08.694 14:29:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:08.694 14:29:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97505 00:23:08.694 14:29:14 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:08.694 14:29:14 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:08.694 killing process with pid 97505 00:23:08.694 14:29:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97505' 00:23:08.694 Received shutdown signal, test time was about 2.000000 seconds 00:23:08.694 00:23:08.694 Latency(us) 00:23:08.694 [2024-12-05T14:29:14.342Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.694 [2024-12-05T14:29:14.342Z] =================================================================================================================== 00:23:08.694 [2024-12-05T14:29:14.342Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:08.694 14:29:14 -- common/autotest_common.sh@955 -- # kill 97505 00:23:08.694 14:29:14 -- common/autotest_common.sh@960 -- # wait 97505 00:23:08.953 14:29:14 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:23:08.953 14:29:14 -- host/digest.sh@77 -- # local rw bs qd 00:23:08.953 14:29:14 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:08.953 14:29:14 -- host/digest.sh@80 -- # rw=randwrite 00:23:08.953 14:29:14 -- host/digest.sh@80 -- # bs=4096 00:23:08.953 14:29:14 -- host/digest.sh@80 -- # qd=128 00:23:08.953 14:29:14 -- host/digest.sh@82 -- # bperfpid=97576 00:23:08.953 14:29:14 -- host/digest.sh@83 -- # waitforlisten 97576 /var/tmp/bperf.sock 00:23:08.954 14:29:14 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:08.954 14:29:14 -- 
common/autotest_common.sh@829 -- # '[' -z 97576 ']' 00:23:08.954 14:29:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:08.954 14:29:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:08.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:08.954 14:29:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:08.954 14:29:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:08.954 14:29:14 -- common/autotest_common.sh@10 -- # set +x 00:23:08.954 [2024-12-05 14:29:14.534047] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:08.954 [2024-12-05 14:29:14.534150] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97576 ] 00:23:09.213 [2024-12-05 14:29:14.666122] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.213 [2024-12-05 14:29:14.743824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.805 14:29:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:09.805 14:29:15 -- common/autotest_common.sh@862 -- # return 0 00:23:09.805 14:29:15 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:23:09.805 14:29:15 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:23:09.805 14:29:15 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:10.371 14:29:15 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:10.371 14:29:15 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:10.630 nvme0n1 00:23:10.630 14:29:16 -- host/digest.sh@91 -- # bperf_py perform_tests 00:23:10.630 14:29:16 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:10.630 Running I/O for 2 seconds... 
00:23:13.164 00:23:13.164 Latency(us) 00:23:13.164 [2024-12-05T14:29:18.812Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:13.164 [2024-12-05T14:29:18.812Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:13.164 nvme0n1 : 2.00 28812.64 112.55 0.00 0.00 4437.34 1869.27 8877.15 00:23:13.164 [2024-12-05T14:29:18.812Z] =================================================================================================================== 00:23:13.164 [2024-12-05T14:29:18.812Z] Total : 28812.64 112.55 0.00 0.00 4437.34 1869.27 8877.15 00:23:13.164 0 00:23:13.164 14:29:18 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:23:13.164 14:29:18 -- host/digest.sh@92 -- # get_accel_stats 00:23:13.164 14:29:18 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:13.164 14:29:18 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:13.164 14:29:18 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:13.164 | select(.opcode=="crc32c") 00:23:13.164 | "\(.module_name) \(.executed)"' 00:23:13.164 14:29:18 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:23:13.164 14:29:18 -- host/digest.sh@93 -- # exp_module=software 00:23:13.164 14:29:18 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:23:13.164 14:29:18 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:13.164 14:29:18 -- host/digest.sh@97 -- # killprocess 97576 00:23:13.164 14:29:18 -- common/autotest_common.sh@936 -- # '[' -z 97576 ']' 00:23:13.164 14:29:18 -- common/autotest_common.sh@940 -- # kill -0 97576 00:23:13.164 14:29:18 -- common/autotest_common.sh@941 -- # uname 00:23:13.164 14:29:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:13.164 14:29:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97576 00:23:13.164 killing process with pid 97576 00:23:13.164 Received shutdown signal, test time was about 2.000000 seconds 00:23:13.164 00:23:13.164 Latency(us) 00:23:13.164 [2024-12-05T14:29:18.812Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:13.164 [2024-12-05T14:29:18.812Z] =================================================================================================================== 00:23:13.164 [2024-12-05T14:29:18.812Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:13.164 14:29:18 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:13.164 14:29:18 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:13.164 14:29:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97576' 00:23:13.164 14:29:18 -- common/autotest_common.sh@955 -- # kill 97576 00:23:13.164 14:29:18 -- common/autotest_common.sh@960 -- # wait 97576 00:23:13.423 14:29:18 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:23:13.423 14:29:18 -- host/digest.sh@77 -- # local rw bs qd 00:23:13.423 14:29:18 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:13.423 14:29:18 -- host/digest.sh@80 -- # rw=randwrite 00:23:13.423 14:29:18 -- host/digest.sh@80 -- # bs=131072 00:23:13.423 14:29:18 -- host/digest.sh@80 -- # qd=16 00:23:13.423 14:29:18 -- host/digest.sh@82 -- # bperfpid=97663 00:23:13.423 14:29:18 -- host/digest.sh@83 -- # waitforlisten 97663 /var/tmp/bperf.sock 00:23:13.423 14:29:18 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:13.423 14:29:18 -- 
common/autotest_common.sh@829 -- # '[' -z 97663 ']' 00:23:13.423 14:29:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:13.423 14:29:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:13.423 14:29:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:13.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:13.423 14:29:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:13.423 14:29:18 -- common/autotest_common.sh@10 -- # set +x 00:23:13.423 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:13.423 Zero copy mechanism will not be used. 00:23:13.423 [2024-12-05 14:29:18.865298] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:13.423 [2024-12-05 14:29:18.865405] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97663 ] 00:23:13.424 [2024-12-05 14:29:19.002313] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.424 [2024-12-05 14:29:19.067559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.360 14:29:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:14.360 14:29:19 -- common/autotest_common.sh@862 -- # return 0 00:23:14.360 14:29:19 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:23:14.360 14:29:19 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:23:14.360 14:29:19 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:14.619 14:29:20 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:14.619 14:29:20 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:14.877 nvme0n1 00:23:14.877 14:29:20 -- host/digest.sh@91 -- # bperf_py perform_tests 00:23:14.877 14:29:20 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:15.136 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:15.136 Zero copy mechanism will not be used. 00:23:15.136 Running I/O for 2 seconds... 
00:23:17.040 00:23:17.040 Latency(us) 00:23:17.040 [2024-12-05T14:29:22.688Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.040 [2024-12-05T14:29:22.688Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:23:17.040 nvme0n1 : 2.00 7954.57 994.32 0.00 0.00 2007.32 1690.53 4647.10 00:23:17.040 [2024-12-05T14:29:22.688Z] =================================================================================================================== 00:23:17.040 [2024-12-05T14:29:22.688Z] Total : 7954.57 994.32 0.00 0.00 2007.32 1690.53 4647.10 00:23:17.040 0 00:23:17.040 14:29:22 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:23:17.040 14:29:22 -- host/digest.sh@92 -- # get_accel_stats 00:23:17.040 14:29:22 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:17.040 14:29:22 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:17.040 | select(.opcode=="crc32c") 00:23:17.040 | "\(.module_name) \(.executed)"' 00:23:17.040 14:29:22 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:17.298 14:29:22 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:23:17.298 14:29:22 -- host/digest.sh@93 -- # exp_module=software 00:23:17.298 14:29:22 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:23:17.298 14:29:22 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:17.298 14:29:22 -- host/digest.sh@97 -- # killprocess 97663 00:23:17.298 14:29:22 -- common/autotest_common.sh@936 -- # '[' -z 97663 ']' 00:23:17.298 14:29:22 -- common/autotest_common.sh@940 -- # kill -0 97663 00:23:17.298 14:29:22 -- common/autotest_common.sh@941 -- # uname 00:23:17.298 14:29:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:17.298 14:29:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97663 00:23:17.298 14:29:22 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:17.298 killing process with pid 97663 00:23:17.298 Received shutdown signal, test time was about 2.000000 seconds 00:23:17.298 00:23:17.298 Latency(us) 00:23:17.298 [2024-12-05T14:29:22.946Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.298 [2024-12-05T14:29:22.946Z] =================================================================================================================== 00:23:17.298 [2024-12-05T14:29:22.946Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:17.298 14:29:22 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:17.298 14:29:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97663' 00:23:17.298 14:29:22 -- common/autotest_common.sh@955 -- # kill 97663 00:23:17.298 14:29:22 -- common/autotest_common.sh@960 -- # wait 97663 00:23:17.556 14:29:23 -- host/digest.sh@126 -- # killprocess 97378 00:23:17.557 14:29:23 -- common/autotest_common.sh@936 -- # '[' -z 97378 ']' 00:23:17.557 14:29:23 -- common/autotest_common.sh@940 -- # kill -0 97378 00:23:17.557 14:29:23 -- common/autotest_common.sh@941 -- # uname 00:23:17.557 14:29:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:17.557 14:29:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97378 00:23:17.557 killing process with pid 97378 00:23:17.557 14:29:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:17.557 14:29:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:17.557 14:29:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97378' 00:23:17.557 
14:29:23 -- common/autotest_common.sh@955 -- # kill 97378 00:23:17.557 14:29:23 -- common/autotest_common.sh@960 -- # wait 97378 00:23:17.815 ************************************ 00:23:17.815 END TEST nvmf_digest_clean 00:23:17.815 ************************************ 00:23:17.815 00:23:17.815 real 0m17.267s 00:23:17.815 user 0m31.615s 00:23:17.815 sys 0m5.585s 00:23:17.815 14:29:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:17.815 14:29:23 -- common/autotest_common.sh@10 -- # set +x 00:23:17.815 14:29:23 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:23:17.815 14:29:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:17.815 14:29:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:17.815 14:29:23 -- common/autotest_common.sh@10 -- # set +x 00:23:17.815 ************************************ 00:23:17.815 START TEST nvmf_digest_error 00:23:17.815 ************************************ 00:23:17.815 14:29:23 -- common/autotest_common.sh@1114 -- # run_digest_error 00:23:17.815 14:29:23 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:23:17.815 14:29:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:17.815 14:29:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:17.815 14:29:23 -- common/autotest_common.sh@10 -- # set +x 00:23:17.815 14:29:23 -- nvmf/common.sh@469 -- # nvmfpid=97782 00:23:17.815 14:29:23 -- nvmf/common.sh@470 -- # waitforlisten 97782 00:23:17.815 14:29:23 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:17.815 14:29:23 -- common/autotest_common.sh@829 -- # '[' -z 97782 ']' 00:23:17.815 14:29:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.815 14:29:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:17.815 14:29:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.815 14:29:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:17.815 14:29:23 -- common/autotest_common.sh@10 -- # set +x 00:23:17.815 [2024-12-05 14:29:23.453935] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:17.815 [2024-12-05 14:29:23.454044] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:18.074 [2024-12-05 14:29:23.588503] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.074 [2024-12-05 14:29:23.644705] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:18.074 [2024-12-05 14:29:23.644872] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:18.074 [2024-12-05 14:29:23.644885] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:18.074 [2024-12-05 14:29:23.644894] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:18.074 [2024-12-05 14:29:23.644923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:19.011 14:29:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:19.011 14:29:24 -- common/autotest_common.sh@862 -- # return 0 00:23:19.011 14:29:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:19.011 14:29:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:19.011 14:29:24 -- common/autotest_common.sh@10 -- # set +x 00:23:19.011 14:29:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:19.011 14:29:24 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:23:19.011 14:29:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.011 14:29:24 -- common/autotest_common.sh@10 -- # set +x 00:23:19.011 [2024-12-05 14:29:24.421418] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:23:19.011 14:29:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.011 14:29:24 -- host/digest.sh@104 -- # common_target_config 00:23:19.011 14:29:24 -- host/digest.sh@43 -- # rpc_cmd 00:23:19.011 14:29:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.011 14:29:24 -- common/autotest_common.sh@10 -- # set +x 00:23:19.011 null0 00:23:19.011 [2024-12-05 14:29:24.528245] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:19.011 [2024-12-05 14:29:24.552325] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:19.011 14:29:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.011 14:29:24 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:23:19.011 14:29:24 -- host/digest.sh@54 -- # local rw bs qd 00:23:19.011 14:29:24 -- host/digest.sh@56 -- # rw=randread 00:23:19.011 14:29:24 -- host/digest.sh@56 -- # bs=4096 00:23:19.011 14:29:24 -- host/digest.sh@56 -- # qd=128 00:23:19.011 14:29:24 -- host/digest.sh@58 -- # bperfpid=97826 00:23:19.011 14:29:24 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:23:19.011 14:29:24 -- host/digest.sh@60 -- # waitforlisten 97826 /var/tmp/bperf.sock 00:23:19.011 14:29:24 -- common/autotest_common.sh@829 -- # '[' -z 97826 ']' 00:23:19.011 14:29:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:19.011 14:29:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:19.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:19.011 14:29:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:19.011 14:29:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:19.011 14:29:24 -- common/autotest_common.sh@10 -- # set +x 00:23:19.011 [2024-12-05 14:29:24.615834] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:23:19.011 [2024-12-05 14:29:24.615937] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97826 ] 00:23:19.269 [2024-12-05 14:29:24.758784] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.269 [2024-12-05 14:29:24.826046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.202 14:29:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:20.202 14:29:25 -- common/autotest_common.sh@862 -- # return 0 00:23:20.202 14:29:25 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:20.202 14:29:25 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:20.202 14:29:25 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:20.202 14:29:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.202 14:29:25 -- common/autotest_common.sh@10 -- # set +x 00:23:20.202 14:29:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.202 14:29:25 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:20.202 14:29:25 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:20.769 nvme0n1 00:23:20.769 14:29:26 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:23:20.769 14:29:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.769 14:29:26 -- common/autotest_common.sh@10 -- # set +x 00:23:20.769 14:29:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.769 14:29:26 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:20.769 14:29:26 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:20.769 Running I/O for 2 seconds... 
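At this point the error-case setup is complete: the nvmf target is listening on 10.0.0.2:4420 with crc32c assigned to the error accel module, bdevperf is running against its own RPC socket, corruption injection is armed, and a controller has been attached with TCP data digest enabled (--ddgst). A condensed sketch of that sequence, using only commands and arguments captured in the trace (rpc_cmd is the autotest wrapper around scripts/rpc.py; socket paths, the target address, and the subsystem NQN are the ones shown above):

  # crc32c was routed to the error accel module while the target was in --wait-for-rpc
  rpc_cmd accel_assign_opc -o crc32c -m error

  # Start bdevperf on its own RPC socket; -z makes it wait for perform_tests
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z &

  # Collect NVMe error stats, retry indefinitely, then attach with data digest enabled
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Arm corruption (-t corrupt -i 256, as captured above), then run the 2-second workload
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each injected corruption then shows up as a digest mismatch when the initiator verifies incoming data, which is what the stream of "data digest error on tqpair" notices and COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions below records.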
00:23:20.769 [2024-12-05 14:29:26.307336] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:20.769 [2024-12-05 14:29:26.307405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.769 [2024-12-05 14:29:26.307425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.769 [2024-12-05 14:29:26.321486] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:20.769 [2024-12-05 14:29:26.321545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:8007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.769 [2024-12-05 14:29:26.321563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.769 [2024-12-05 14:29:26.335731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:20.769 [2024-12-05 14:29:26.335786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.769 [2024-12-05 14:29:26.335821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.769 [2024-12-05 14:29:26.350857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:20.769 [2024-12-05 14:29:26.350910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.769 [2024-12-05 14:29:26.350927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.769 [2024-12-05 14:29:26.365473] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:20.769 [2024-12-05 14:29:26.365527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.769 [2024-12-05 14:29:26.365544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.769 [2024-12-05 14:29:26.378180] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:20.769 [2024-12-05 14:29:26.378225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.769 [2024-12-05 14:29:26.378236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.769 [2024-12-05 14:29:26.391453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:20.769 [2024-12-05 14:29:26.391487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.769 [2024-12-05 14:29:26.391498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.769 [2024-12-05 14:29:26.403495] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:20.769 [2024-12-05 14:29:26.403527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.769 [2024-12-05 14:29:26.403539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.029 [2024-12-05 14:29:26.417839] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.029 [2024-12-05 14:29:26.417890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:16832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.029 [2024-12-05 14:29:26.417902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.029 [2024-12-05 14:29:26.426391] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.029 [2024-12-05 14:29:26.426423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.029 [2024-12-05 14:29:26.426435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.029 [2024-12-05 14:29:26.439606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.029 [2024-12-05 14:29:26.439652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.029 [2024-12-05 14:29:26.439664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.029 [2024-12-05 14:29:26.452168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.029 [2024-12-05 14:29:26.452214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.029 [2024-12-05 14:29:26.452226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.029 [2024-12-05 14:29:26.464526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.029 [2024-12-05 14:29:26.464558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.029 [2024-12-05 14:29:26.464570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.029 [2024-12-05 14:29:26.475175] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.029 [2024-12-05 14:29:26.475208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.029 [2024-12-05 14:29:26.475219] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.029 [2024-12-05 14:29:26.486290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.029 [2024-12-05 14:29:26.486323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.029 [2024-12-05 14:29:26.486334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.029 [2024-12-05 14:29:26.499232] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.029 [2024-12-05 14:29:26.499266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.029 [2024-12-05 14:29:26.499278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.029 [2024-12-05 14:29:26.506931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.029 [2024-12-05 14:29:26.506964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.029 [2024-12-05 14:29:26.506976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.029 [2024-12-05 14:29:26.518957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.029 [2024-12-05 14:29:26.518990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.029 [2024-12-05 14:29:26.519001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.029 [2024-12-05 14:29:26.529894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.029 [2024-12-05 14:29:26.529926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.029 [2024-12-05 14:29:26.529939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.029 [2024-12-05 14:29:26.539914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.029 [2024-12-05 14:29:26.539946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.029 [2024-12-05 14:29:26.539980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.029 [2024-12-05 14:29:26.550048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.029 [2024-12-05 14:29:26.550081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:21.029 [2024-12-05 14:29:26.550093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.029 [2024-12-05 14:29:26.558916] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.029 [2024-12-05 14:29:26.558947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.029 [2024-12-05 14:29:26.558958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.029 [2024-12-05 14:29:26.569092] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.029 [2024-12-05 14:29:26.569137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.029 [2024-12-05 14:29:26.569148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.029 [2024-12-05 14:29:26.580150] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.029 [2024-12-05 14:29:26.580194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.029 [2024-12-05 14:29:26.580206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.029 [2024-12-05 14:29:26.591350] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.029 [2024-12-05 14:29:26.591381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.029 [2024-12-05 14:29:26.591392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.029 [2024-12-05 14:29:26.600362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.029 [2024-12-05 14:29:26.600396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:7968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.029 [2024-12-05 14:29:26.600407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.029 [2024-12-05 14:29:26.612682] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.029 [2024-12-05 14:29:26.612715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.029 [2024-12-05 14:29:26.612726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.029 [2024-12-05 14:29:26.625667] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.029 [2024-12-05 14:29:26.625699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6091 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.029 [2024-12-05 14:29:26.625710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.029 [2024-12-05 14:29:26.637495] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.029 [2024-12-05 14:29:26.637528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.029 [2024-12-05 14:29:26.637539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.029 [2024-12-05 14:29:26.650332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.029 [2024-12-05 14:29:26.650365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.029 [2024-12-05 14:29:26.650377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.029 [2024-12-05 14:29:26.658562] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.029 [2024-12-05 14:29:26.658594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.029 [2024-12-05 14:29:26.658606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.029 [2024-12-05 14:29:26.670772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.029 [2024-12-05 14:29:26.670814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.029 [2024-12-05 14:29:26.670827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.289 [2024-12-05 14:29:26.683908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.289 [2024-12-05 14:29:26.683939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.289 [2024-12-05 14:29:26.683951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.289 [2024-12-05 14:29:26.695678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.289 [2024-12-05 14:29:26.695711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.289 [2024-12-05 14:29:26.695722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.289 [2024-12-05 14:29:26.705292] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.289 [2024-12-05 14:29:26.705325] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.289 [2024-12-05 14:29:26.705337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.289 [2024-12-05 14:29:26.714600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.289 [2024-12-05 14:29:26.714633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.289 [2024-12-05 14:29:26.714645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.289 [2024-12-05 14:29:26.723637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.289 [2024-12-05 14:29:26.723669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.289 [2024-12-05 14:29:26.723680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.289 [2024-12-05 14:29:26.736375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.289 [2024-12-05 14:29:26.736407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:21334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.289 [2024-12-05 14:29:26.736419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.289 [2024-12-05 14:29:26.748666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.289 [2024-12-05 14:29:26.748698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.289 [2024-12-05 14:29:26.748709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.289 [2024-12-05 14:29:26.761096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.289 [2024-12-05 14:29:26.761128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.289 [2024-12-05 14:29:26.761139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.289 [2024-12-05 14:29:26.773184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.289 [2024-12-05 14:29:26.773216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.289 [2024-12-05 14:29:26.773228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.289 [2024-12-05 14:29:26.783872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.289 [2024-12-05 
14:29:26.783904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.289 [2024-12-05 14:29:26.783915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.289 [2024-12-05 14:29:26.792448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.289 [2024-12-05 14:29:26.792480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:11100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.289 [2024-12-05 14:29:26.792492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.289 [2024-12-05 14:29:26.803353] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.289 [2024-12-05 14:29:26.803387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.289 [2024-12-05 14:29:26.803398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.289 [2024-12-05 14:29:26.813759] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.289 [2024-12-05 14:29:26.813792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.289 [2024-12-05 14:29:26.813825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.289 [2024-12-05 14:29:26.822198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.289 [2024-12-05 14:29:26.822229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.289 [2024-12-05 14:29:26.822240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.289 [2024-12-05 14:29:26.834785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.289 [2024-12-05 14:29:26.834826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.289 [2024-12-05 14:29:26.834840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.289 [2024-12-05 14:29:26.845800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.289 [2024-12-05 14:29:26.845840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.289 [2024-12-05 14:29:26.845852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.289 [2024-12-05 14:29:26.854754] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x19778d0) 00:23:21.289 [2024-12-05 14:29:26.854786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.289 [2024-12-05 14:29:26.854798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.289 [2024-12-05 14:29:26.864300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.289 [2024-12-05 14:29:26.864333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.289 [2024-12-05 14:29:26.864344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.289 [2024-12-05 14:29:26.873601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.289 [2024-12-05 14:29:26.873633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:17168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.289 [2024-12-05 14:29:26.873644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.289 [2024-12-05 14:29:26.882782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.289 [2024-12-05 14:29:26.882824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.289 [2024-12-05 14:29:26.882842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.289 [2024-12-05 14:29:26.892287] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.290 [2024-12-05 14:29:26.892336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.290 [2024-12-05 14:29:26.892349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.290 [2024-12-05 14:29:26.901923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.290 [2024-12-05 14:29:26.901956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.290 [2024-12-05 14:29:26.901967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.290 [2024-12-05 14:29:26.910986] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.290 [2024-12-05 14:29:26.911018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.290 [2024-12-05 14:29:26.911030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.290 [2024-12-05 14:29:26.920673] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.290 [2024-12-05 14:29:26.920707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.290 [2024-12-05 14:29:26.920718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.290 [2024-12-05 14:29:26.930595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.290 [2024-12-05 14:29:26.930629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.290 [2024-12-05 14:29:26.930656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.549 [2024-12-05 14:29:26.943447] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.549 [2024-12-05 14:29:26.943480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:18339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.549 [2024-12-05 14:29:26.943492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.549 [2024-12-05 14:29:26.955578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.549 [2024-12-05 14:29:26.955611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.549 [2024-12-05 14:29:26.955623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.549 [2024-12-05 14:29:26.967459] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.549 [2024-12-05 14:29:26.967491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.549 [2024-12-05 14:29:26.967502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.549 [2024-12-05 14:29:26.980230] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.549 [2024-12-05 14:29:26.980274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.549 [2024-12-05 14:29:26.980295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.549 [2024-12-05 14:29:26.990695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.549 [2024-12-05 14:29:26.990727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.549 [2024-12-05 14:29:26.990737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:23:21.549 [2024-12-05 14:29:26.999842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.549 [2024-12-05 14:29:26.999874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.549 [2024-12-05 14:29:26.999885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.549 [2024-12-05 14:29:27.010663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.549 [2024-12-05 14:29:27.010697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.549 [2024-12-05 14:29:27.010708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.549 [2024-12-05 14:29:27.019560] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.549 [2024-12-05 14:29:27.019592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.549 [2024-12-05 14:29:27.019604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.549 [2024-12-05 14:29:27.031013] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.549 [2024-12-05 14:29:27.031057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.549 [2024-12-05 14:29:27.031069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.549 [2024-12-05 14:29:27.041858] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.549 [2024-12-05 14:29:27.041890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.549 [2024-12-05 14:29:27.041901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.549 [2024-12-05 14:29:27.051727] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.549 [2024-12-05 14:29:27.051759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.549 [2024-12-05 14:29:27.051771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.549 [2024-12-05 14:29:27.061159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.549 [2024-12-05 14:29:27.061191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.550 [2024-12-05 14:29:27.061203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.550 [2024-12-05 14:29:27.069452] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.550 [2024-12-05 14:29:27.069484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.550 [2024-12-05 14:29:27.069495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.550 [2024-12-05 14:29:27.079618] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.550 [2024-12-05 14:29:27.079650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.550 [2024-12-05 14:29:27.079662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.550 [2024-12-05 14:29:27.088441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.550 [2024-12-05 14:29:27.088473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.550 [2024-12-05 14:29:27.088484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.550 [2024-12-05 14:29:27.097509] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.550 [2024-12-05 14:29:27.097541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.550 [2024-12-05 14:29:27.097553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.550 [2024-12-05 14:29:27.106451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.550 [2024-12-05 14:29:27.106483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.550 [2024-12-05 14:29:27.106494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.550 [2024-12-05 14:29:27.118248] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.550 [2024-12-05 14:29:27.118280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.550 [2024-12-05 14:29:27.118291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.550 [2024-12-05 14:29:27.128642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.550 [2024-12-05 14:29:27.128675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.550 [2024-12-05 14:29:27.128687] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.550 [2024-12-05 14:29:27.137510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.550 [2024-12-05 14:29:27.137543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.550 [2024-12-05 14:29:27.137554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.550 [2024-12-05 14:29:27.148096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.550 [2024-12-05 14:29:27.148141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.550 [2024-12-05 14:29:27.148153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.550 [2024-12-05 14:29:27.156930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.550 [2024-12-05 14:29:27.156962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:16392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.550 [2024-12-05 14:29:27.156973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.550 [2024-12-05 14:29:27.166719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.550 [2024-12-05 14:29:27.166764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.550 [2024-12-05 14:29:27.166776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.550 [2024-12-05 14:29:27.177584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.550 [2024-12-05 14:29:27.177615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.550 [2024-12-05 14:29:27.177626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.550 [2024-12-05 14:29:27.187950] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.550 [2024-12-05 14:29:27.187990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.550 [2024-12-05 14:29:27.188002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.810 [2024-12-05 14:29:27.197475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.810 [2024-12-05 14:29:27.197508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:21.810 [2024-12-05 14:29:27.197536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.810 [2024-12-05 14:29:27.208605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.810 [2024-12-05 14:29:27.208651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.810 [2024-12-05 14:29:27.208662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.810 [2024-12-05 14:29:27.220338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.810 [2024-12-05 14:29:27.220372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:18916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.810 [2024-12-05 14:29:27.220384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.810 [2024-12-05 14:29:27.230386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.810 [2024-12-05 14:29:27.230418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.810 [2024-12-05 14:29:27.230429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.810 [2024-12-05 14:29:27.239047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.810 [2024-12-05 14:29:27.239080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.810 [2024-12-05 14:29:27.239091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.810 [2024-12-05 14:29:27.248869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.810 [2024-12-05 14:29:27.248912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.810 [2024-12-05 14:29:27.248923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.810 [2024-12-05 14:29:27.260987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.810 [2024-12-05 14:29:27.261020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.810 [2024-12-05 14:29:27.261032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.810 [2024-12-05 14:29:27.271842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.810 [2024-12-05 14:29:27.271874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 
lba:14583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.810 [2024-12-05 14:29:27.271885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.810 [2024-12-05 14:29:27.283713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.810 [2024-12-05 14:29:27.283745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.810 [2024-12-05 14:29:27.283757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.810 [2024-12-05 14:29:27.293378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.810 [2024-12-05 14:29:27.293411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.810 [2024-12-05 14:29:27.293423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.810 [2024-12-05 14:29:27.303275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.810 [2024-12-05 14:29:27.303308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.810 [2024-12-05 14:29:27.303319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.810 [2024-12-05 14:29:27.313197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.810 [2024-12-05 14:29:27.313229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:20966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.811 [2024-12-05 14:29:27.313241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.811 [2024-12-05 14:29:27.324855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.811 [2024-12-05 14:29:27.324887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.811 [2024-12-05 14:29:27.324898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.811 [2024-12-05 14:29:27.336057] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.811 [2024-12-05 14:29:27.336088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.811 [2024-12-05 14:29:27.336099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.811 [2024-12-05 14:29:27.347372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.811 [2024-12-05 14:29:27.347404] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.811 [2024-12-05 14:29:27.347416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.811 [2024-12-05 14:29:27.357140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.811 [2024-12-05 14:29:27.357171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.811 [2024-12-05 14:29:27.357183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.811 [2024-12-05 14:29:27.369236] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.811 [2024-12-05 14:29:27.369268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.811 [2024-12-05 14:29:27.369279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.811 [2024-12-05 14:29:27.381324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.811 [2024-12-05 14:29:27.381357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.811 [2024-12-05 14:29:27.381368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.811 [2024-12-05 14:29:27.393278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.811 [2024-12-05 14:29:27.393310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.811 [2024-12-05 14:29:27.393322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.811 [2024-12-05 14:29:27.406224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.811 [2024-12-05 14:29:27.406256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.811 [2024-12-05 14:29:27.406267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.811 [2024-12-05 14:29:27.414853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.811 [2024-12-05 14:29:27.414896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.811 [2024-12-05 14:29:27.414907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.811 [2024-12-05 14:29:27.424797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 
00:23:21.811 [2024-12-05 14:29:27.424839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.811 [2024-12-05 14:29:27.424851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.811 [2024-12-05 14:29:27.434581] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.811 [2024-12-05 14:29:27.434616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:10648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.811 [2024-12-05 14:29:27.434628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.811 [2024-12-05 14:29:27.444507] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.811 [2024-12-05 14:29:27.444541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.811 [2024-12-05 14:29:27.444553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.811 [2024-12-05 14:29:27.454278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:21.811 [2024-12-05 14:29:27.454311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.811 [2024-12-05 14:29:27.454323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.071 [2024-12-05 14:29:27.463574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.071 [2024-12-05 14:29:27.463607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.071 [2024-12-05 14:29:27.463619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.071 [2024-12-05 14:29:27.473147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.071 [2024-12-05 14:29:27.473180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.071 [2024-12-05 14:29:27.473191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.071 [2024-12-05 14:29:27.482509] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.071 [2024-12-05 14:29:27.482543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.071 [2024-12-05 14:29:27.482556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.071 [2024-12-05 14:29:27.491926] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.071 [2024-12-05 14:29:27.491977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.071 [2024-12-05 14:29:27.491994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.071 [2024-12-05 14:29:27.502677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.071 [2024-12-05 14:29:27.502721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.071 [2024-12-05 14:29:27.502732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.071 [2024-12-05 14:29:27.512893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.071 [2024-12-05 14:29:27.512938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.071 [2024-12-05 14:29:27.512949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.071 [2024-12-05 14:29:27.523278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.071 [2024-12-05 14:29:27.523310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:24250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.071 [2024-12-05 14:29:27.523321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.071 [2024-12-05 14:29:27.533304] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.071 [2024-12-05 14:29:27.533337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.071 [2024-12-05 14:29:27.533348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.071 [2024-12-05 14:29:27.542761] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.072 [2024-12-05 14:29:27.542794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.072 [2024-12-05 14:29:27.542818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.072 [2024-12-05 14:29:27.553136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.072 [2024-12-05 14:29:27.553181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.072 [2024-12-05 14:29:27.553192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.072 [2024-12-05 14:29:27.563980] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.072 [2024-12-05 14:29:27.564023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:22456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.072 [2024-12-05 14:29:27.564035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.072 [2024-12-05 14:29:27.574126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.072 [2024-12-05 14:29:27.574159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.072 [2024-12-05 14:29:27.574170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.072 [2024-12-05 14:29:27.583293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.072 [2024-12-05 14:29:27.583337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.072 [2024-12-05 14:29:27.583349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.072 [2024-12-05 14:29:27.593085] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.072 [2024-12-05 14:29:27.593129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.072 [2024-12-05 14:29:27.593141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.072 [2024-12-05 14:29:27.602979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.072 [2024-12-05 14:29:27.603023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.072 [2024-12-05 14:29:27.603035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.072 [2024-12-05 14:29:27.614611] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.072 [2024-12-05 14:29:27.614644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.072 [2024-12-05 14:29:27.614655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.072 [2024-12-05 14:29:27.624107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.072 [2024-12-05 14:29:27.624152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:16250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.072 [2024-12-05 14:29:27.624164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:23:22.072 [2024-12-05 14:29:27.635263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.072 [2024-12-05 14:29:27.635306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.072 [2024-12-05 14:29:27.635317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.072 [2024-12-05 14:29:27.646118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.072 [2024-12-05 14:29:27.646152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.072 [2024-12-05 14:29:27.646163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.072 [2024-12-05 14:29:27.655097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.072 [2024-12-05 14:29:27.655128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.072 [2024-12-05 14:29:27.655140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.072 [2024-12-05 14:29:27.666712] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.072 [2024-12-05 14:29:27.666744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.072 [2024-12-05 14:29:27.666756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.072 [2024-12-05 14:29:27.677459] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.072 [2024-12-05 14:29:27.677491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.072 [2024-12-05 14:29:27.677503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.072 [2024-12-05 14:29:27.690218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.072 [2024-12-05 14:29:27.690251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.072 [2024-12-05 14:29:27.690263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.072 [2024-12-05 14:29:27.702230] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.072 [2024-12-05 14:29:27.702263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.072 [2024-12-05 14:29:27.702275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.072 [2024-12-05 14:29:27.714335] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.072 [2024-12-05 14:29:27.714369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.072 [2024-12-05 14:29:27.714380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.332 [2024-12-05 14:29:27.726838] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.332 [2024-12-05 14:29:27.726869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.332 [2024-12-05 14:29:27.726881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.332 [2024-12-05 14:29:27.735427] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.332 [2024-12-05 14:29:27.735460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.332 [2024-12-05 14:29:27.735472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.332 [2024-12-05 14:29:27.747677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.332 [2024-12-05 14:29:27.747712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.332 [2024-12-05 14:29:27.747724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.332 [2024-12-05 14:29:27.758702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.332 [2024-12-05 14:29:27.758734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.332 [2024-12-05 14:29:27.758746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.332 [2024-12-05 14:29:27.770739] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.332 [2024-12-05 14:29:27.770771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.332 [2024-12-05 14:29:27.770784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.332 [2024-12-05 14:29:27.783411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.332 [2024-12-05 14:29:27.783442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.332 [2024-12-05 14:29:27.783454] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.332 [2024-12-05 14:29:27.794871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.332 [2024-12-05 14:29:27.794903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.333 [2024-12-05 14:29:27.794914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.333 [2024-12-05 14:29:27.804895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.333 [2024-12-05 14:29:27.804926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.333 [2024-12-05 14:29:27.804937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.333 [2024-12-05 14:29:27.817278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.333 [2024-12-05 14:29:27.817311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.333 [2024-12-05 14:29:27.817322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.333 [2024-12-05 14:29:27.825800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.333 [2024-12-05 14:29:27.825842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.333 [2024-12-05 14:29:27.825853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.333 [2024-12-05 14:29:27.837932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.333 [2024-12-05 14:29:27.837964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.333 [2024-12-05 14:29:27.837975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.333 [2024-12-05 14:29:27.850105] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.333 [2024-12-05 14:29:27.850138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.333 [2024-12-05 14:29:27.850150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.333 [2024-12-05 14:29:27.859713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.333 [2024-12-05 14:29:27.859746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:22.333 [2024-12-05 14:29:27.859757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.333 [2024-12-05 14:29:27.869881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.333 [2024-12-05 14:29:27.869912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.333 [2024-12-05 14:29:27.869923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.333 [2024-12-05 14:29:27.879692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.333 [2024-12-05 14:29:27.879725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.333 [2024-12-05 14:29:27.879735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.333 [2024-12-05 14:29:27.891644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.333 [2024-12-05 14:29:27.891677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.333 [2024-12-05 14:29:27.891688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.333 [2024-12-05 14:29:27.904634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.333 [2024-12-05 14:29:27.904666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.333 [2024-12-05 14:29:27.904678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.333 [2024-12-05 14:29:27.915749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.333 [2024-12-05 14:29:27.915781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.333 [2024-12-05 14:29:27.915792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.333 [2024-12-05 14:29:27.925044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.333 [2024-12-05 14:29:27.925075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.333 [2024-12-05 14:29:27.925087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.333 [2024-12-05 14:29:27.937703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.333 [2024-12-05 14:29:27.937737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:21717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.333 [2024-12-05 14:29:27.937750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.333 [2024-12-05 14:29:27.949787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.333 [2024-12-05 14:29:27.949831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:19595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.333 [2024-12-05 14:29:27.949843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.333 [2024-12-05 14:29:27.962363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.333 [2024-12-05 14:29:27.962396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.333 [2024-12-05 14:29:27.962407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.333 [2024-12-05 14:29:27.974508] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.333 [2024-12-05 14:29:27.974541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.333 [2024-12-05 14:29:27.974554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.593 [2024-12-05 14:29:27.983021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.593 [2024-12-05 14:29:27.983053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.593 [2024-12-05 14:29:27.983064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.593 [2024-12-05 14:29:27.994849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.593 [2024-12-05 14:29:27.994882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.593 [2024-12-05 14:29:27.994894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.593 [2024-12-05 14:29:28.007452] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.593 [2024-12-05 14:29:28.007485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.593 [2024-12-05 14:29:28.007497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.593 [2024-12-05 14:29:28.019291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.593 [2024-12-05 14:29:28.019325] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.593 [2024-12-05 14:29:28.019336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.593 [2024-12-05 14:29:28.031324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.593 [2024-12-05 14:29:28.031356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.593 [2024-12-05 14:29:28.031367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.593 [2024-12-05 14:29:28.043161] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.593 [2024-12-05 14:29:28.043193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.593 [2024-12-05 14:29:28.043205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.593 [2024-12-05 14:29:28.051520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.594 [2024-12-05 14:29:28.051555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:8831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.594 [2024-12-05 14:29:28.051567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.594 [2024-12-05 14:29:28.064534] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.594 [2024-12-05 14:29:28.064566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.594 [2024-12-05 14:29:28.064578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.594 [2024-12-05 14:29:28.074310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.594 [2024-12-05 14:29:28.074342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.594 [2024-12-05 14:29:28.074354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.594 [2024-12-05 14:29:28.086000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.594 [2024-12-05 14:29:28.086032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.594 [2024-12-05 14:29:28.086044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.594 [2024-12-05 14:29:28.099180] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 
00:23:22.594 [2024-12-05 14:29:28.099213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.594 [2024-12-05 14:29:28.099225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.594 [2024-12-05 14:29:28.111067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.594 [2024-12-05 14:29:28.111100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:7944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.594 [2024-12-05 14:29:28.111112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.594 [2024-12-05 14:29:28.124099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.594 [2024-12-05 14:29:28.124145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.594 [2024-12-05 14:29:28.124156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.594 [2024-12-05 14:29:28.134448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.594 [2024-12-05 14:29:28.134480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.594 [2024-12-05 14:29:28.134492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.594 [2024-12-05 14:29:28.144351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.594 [2024-12-05 14:29:28.144383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.594 [2024-12-05 14:29:28.144395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.594 [2024-12-05 14:29:28.154201] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.594 [2024-12-05 14:29:28.154234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.594 [2024-12-05 14:29:28.154245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.594 [2024-12-05 14:29:28.163266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.594 [2024-12-05 14:29:28.163299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.594 [2024-12-05 14:29:28.163310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.594 [2024-12-05 14:29:28.172517] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.594 [2024-12-05 14:29:28.172550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.594 [2024-12-05 14:29:28.172562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.594 [2024-12-05 14:29:28.181860] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.594 [2024-12-05 14:29:28.181892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.594 [2024-12-05 14:29:28.181903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.594 [2024-12-05 14:29:28.191060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.594 [2024-12-05 14:29:28.191092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.594 [2024-12-05 14:29:28.191104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.594 [2024-12-05 14:29:28.200511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.594 [2024-12-05 14:29:28.200543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.594 [2024-12-05 14:29:28.200555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.594 [2024-12-05 14:29:28.210756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.594 [2024-12-05 14:29:28.210812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.594 [2024-12-05 14:29:28.210830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.594 [2024-12-05 14:29:28.219549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.594 [2024-12-05 14:29:28.219591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:14387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.594 [2024-12-05 14:29:28.219603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.594 [2024-12-05 14:29:28.232575] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.594 [2024-12-05 14:29:28.232607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.594 [2024-12-05 14:29:28.232619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.853 [2024-12-05 14:29:28.247334] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.853 [2024-12-05 14:29:28.247365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.853 [2024-12-05 14:29:28.247376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.853 [2024-12-05 14:29:28.259412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.853 [2024-12-05 14:29:28.259445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.853 [2024-12-05 14:29:28.259457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.853 [2024-12-05 14:29:28.272499] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.853 [2024-12-05 14:29:28.272531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.853 [2024-12-05 14:29:28.272543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.853 [2024-12-05 14:29:28.283771] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.853 [2024-12-05 14:29:28.283813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.853 [2024-12-05 14:29:28.283826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.853 [2024-12-05 14:29:28.292101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19778d0) 00:23:22.853 [2024-12-05 14:29:28.292133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.853 [2024-12-05 14:29:28.292144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.853 00:23:22.853 Latency(us) 00:23:22.853 [2024-12-05T14:29:28.501Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:22.853 [2024-12-05T14:29:28.501Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:22.853 nvme0n1 : 2.01 23404.79 91.42 0.00 0.00 5463.91 2308.65 16205.27 00:23:22.853 [2024-12-05T14:29:28.501Z] =================================================================================================================== 00:23:22.853 [2024-12-05T14:29:28.501Z] Total : 23404.79 91.42 0.00 0.00 5463.91 2308.65 16205.27 00:23:22.853 0 00:23:22.853 14:29:28 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:22.853 14:29:28 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:22.853 14:29:28 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:22.853 | .driver_specific 00:23:22.853 | .nvme_error 00:23:22.853 | .status_code 00:23:22.853 | .command_transient_transport_error' 00:23:22.853 14:29:28 -- host/digest.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:23.113 14:29:28 -- host/digest.sh@71 -- # (( 184 > 0 )) 00:23:23.113 14:29:28 -- host/digest.sh@73 -- # killprocess 97826 00:23:23.113 14:29:28 -- common/autotest_common.sh@936 -- # '[' -z 97826 ']' 00:23:23.113 14:29:28 -- common/autotest_common.sh@940 -- # kill -0 97826 00:23:23.113 14:29:28 -- common/autotest_common.sh@941 -- # uname 00:23:23.113 14:29:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:23.113 14:29:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97826 00:23:23.113 14:29:28 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:23.113 14:29:28 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:23.113 killing process with pid 97826 00:23:23.113 14:29:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97826' 00:23:23.113 Received shutdown signal, test time was about 2.000000 seconds 00:23:23.113 00:23:23.113 Latency(us) 00:23:23.113 [2024-12-05T14:29:28.761Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:23.113 [2024-12-05T14:29:28.761Z] =================================================================================================================== 00:23:23.113 [2024-12-05T14:29:28.761Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:23.113 14:29:28 -- common/autotest_common.sh@955 -- # kill 97826 00:23:23.113 14:29:28 -- common/autotest_common.sh@960 -- # wait 97826 00:23:23.372 14:29:28 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:23:23.372 14:29:28 -- host/digest.sh@54 -- # local rw bs qd 00:23:23.372 14:29:28 -- host/digest.sh@56 -- # rw=randread 00:23:23.372 14:29:28 -- host/digest.sh@56 -- # bs=131072 00:23:23.372 14:29:28 -- host/digest.sh@56 -- # qd=16 00:23:23.372 14:29:28 -- host/digest.sh@58 -- # bperfpid=97918 00:23:23.372 14:29:28 -- host/digest.sh@60 -- # waitforlisten 97918 /var/tmp/bperf.sock 00:23:23.372 14:29:28 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:23:23.372 14:29:28 -- common/autotest_common.sh@829 -- # '[' -z 97918 ']' 00:23:23.372 14:29:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:23.372 14:29:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:23.372 14:29:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:23.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:23.372 14:29:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:23.372 14:29:28 -- common/autotest_common.sh@10 -- # set +x 00:23:23.372 [2024-12-05 14:29:28.933482] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:23.372 [2024-12-05 14:29:28.933596] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97918 ] 00:23:23.372 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:23.372 Zero copy mechanism will not be used. 
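The (( 184 > 0 )) check traced above is the pass condition for the run that just finished: host/digest.sh reads the bdev's NVMe error statistics over the bperf RPC socket and asserts that the injected digest corruptions were surfaced as transient transport errors. A minimal sketch of that readback, assuming the helper is shaped the way the rpc.py call and jq filter in the trace suggest (get_transient_errcount and bperf_rpc are the helper names echoed in the trace, not the verbatim script source):

#!/usr/bin/env bash
# Sketch: count COMMAND TRANSIENT TRANSPORT ERROR completions recorded for a bdev.
# Assumes bdevperf is still serving RPCs on /var/tmp/bperf.sock, as in the trace above.
get_transient_errcount() {
    local bdev=$1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)   # 184 in the run above
(( errcount > 0 ))   # test fails if no injected digest error was counted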
00:23:23.631 [2024-12-05 14:29:29.073378] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.631 [2024-12-05 14:29:29.143671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.199 14:29:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:24.199 14:29:29 -- common/autotest_common.sh@862 -- # return 0 00:23:24.199 14:29:29 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:24.199 14:29:29 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:24.459 14:29:30 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:24.459 14:29:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.459 14:29:30 -- common/autotest_common.sh@10 -- # set +x 00:23:24.459 14:29:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.459 14:29:30 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:24.459 14:29:30 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:24.719 nvme0n1 00:23:24.719 14:29:30 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:23:24.719 14:29:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.719 14:29:30 -- common/autotest_common.sh@10 -- # set +x 00:23:24.980 14:29:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.980 14:29:30 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:24.980 14:29:30 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:24.980 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:24.980 Zero copy mechanism will not be used. 00:23:24.980 Running I/O for 2 seconds... 
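Before the "Running I/O for 2 seconds..." marker, the trace above shows the setup for this run: bdevperf is relaunched for a 16-deep, 128 KiB random-read workload, NVMe error statistics are enabled, a TCP controller is attached with data digest checking (--ddgst) turned on, and the accel crc32c operation is told to corrupt 32 operations so that digest verification fails on the read path. A condensed sketch of that sequence, built only from the commands echoed in the trace; the rpc_cmd socket and the helper bodies are assumptions, not the verbatim host/digest.sh source:

# Relaunch bdevperf as the RPC target for this run (run_bperf_err randread 131072 16).
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
bperfpid=$!   # the harness then waits for the socket via waitforlisten (pid 97918 above)

# Helpers as suggested by the trace: bperf_rpc targets the bdevperf socket,
# rpc_cmd targets the default SPDK RPC socket (assumed /var/tmp/spdk.sock here).
bperf_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
rpc_cmd()   { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

# Keep retrying failed I/O indefinitely and record per-bdev NVMe error statistics.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Injection starts disabled while the controller is attached with data digest enabled.
rpc_cmd accel_error_inject_error -o crc32c -t disable
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt 32 crc32c operations so the data digest check fails and the I/O is retried.
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32

# Drive the workload for the configured 2 seconds; every failure appears below as a
# "data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR" pair.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests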
00:23:24.980 [2024-12-05 14:29:30.463130] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.980 [2024-12-05 14:29:30.463184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.980 [2024-12-05 14:29:30.463203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:24.980 [2024-12-05 14:29:30.467400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.980 [2024-12-05 14:29:30.467435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.980 [2024-12-05 14:29:30.467459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:24.980 [2024-12-05 14:29:30.471418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.980 [2024-12-05 14:29:30.471453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.980 [2024-12-05 14:29:30.471475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:24.980 [2024-12-05 14:29:30.474874] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.980 [2024-12-05 14:29:30.474907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.980 [2024-12-05 14:29:30.474931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:24.980 [2024-12-05 14:29:30.478565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.980 [2024-12-05 14:29:30.478598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.980 [2024-12-05 14:29:30.478621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:24.980 [2024-12-05 14:29:30.482681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.980 [2024-12-05 14:29:30.482714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.980 [2024-12-05 14:29:30.482737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:24.980 [2024-12-05 14:29:30.486488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.980 [2024-12-05 14:29:30.486522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.980 [2024-12-05 14:29:30.486545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:24.980 [2024-12-05 14:29:30.490244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.980 [2024-12-05 14:29:30.490277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.980 [2024-12-05 14:29:30.490299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:24.980 [2024-12-05 14:29:30.493619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.980 [2024-12-05 14:29:30.493651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.980 [2024-12-05 14:29:30.493675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:24.980 [2024-12-05 14:29:30.497843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.980 [2024-12-05 14:29:30.497876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.980 [2024-12-05 14:29:30.497898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:24.980 [2024-12-05 14:29:30.501428] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.980 [2024-12-05 14:29:30.501461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.980 [2024-12-05 14:29:30.501484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:24.980 [2024-12-05 14:29:30.505417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.980 [2024-12-05 14:29:30.505451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.980 [2024-12-05 14:29:30.505462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:24.980 [2024-12-05 14:29:30.509045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.980 [2024-12-05 14:29:30.509079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.980 [2024-12-05 14:29:30.509091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:24.980 [2024-12-05 14:29:30.512198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.980 [2024-12-05 14:29:30.512232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.980 [2024-12-05 14:29:30.512252] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:24.980 [2024-12-05 14:29:30.516357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.980 [2024-12-05 14:29:30.516389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.980 [2024-12-05 14:29:30.516413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:24.980 [2024-12-05 14:29:30.520694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.980 [2024-12-05 14:29:30.520727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.980 [2024-12-05 14:29:30.520750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:24.980 [2024-12-05 14:29:30.524788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.980 [2024-12-05 14:29:30.524829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.980 [2024-12-05 14:29:30.524841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:24.980 [2024-12-05 14:29:30.528331] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.980 [2024-12-05 14:29:30.528365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.980 [2024-12-05 14:29:30.528376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:24.980 [2024-12-05 14:29:30.532778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.980 [2024-12-05 14:29:30.532820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.980 [2024-12-05 14:29:30.532833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:24.980 [2024-12-05 14:29:30.536285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.980 [2024-12-05 14:29:30.536318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.980 [2024-12-05 14:29:30.536340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:24.980 [2024-12-05 14:29:30.539976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.980 [2024-12-05 14:29:30.540022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:24.980 [2024-12-05 14:29:30.540033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:24.981 [2024-12-05 14:29:30.543093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.981 [2024-12-05 14:29:30.543126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.981 [2024-12-05 14:29:30.543149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:24.981 [2024-12-05 14:29:30.546797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.981 [2024-12-05 14:29:30.546839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.981 [2024-12-05 14:29:30.546851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:24.981 [2024-12-05 14:29:30.550221] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.981 [2024-12-05 14:29:30.550255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.981 [2024-12-05 14:29:30.550276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:24.981 [2024-12-05 14:29:30.553869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.981 [2024-12-05 14:29:30.553902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.981 [2024-12-05 14:29:30.553914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:24.981 [2024-12-05 14:29:30.557405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.981 [2024-12-05 14:29:30.557438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.981 [2024-12-05 14:29:30.557449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:24.981 [2024-12-05 14:29:30.560930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.981 [2024-12-05 14:29:30.560974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.981 [2024-12-05 14:29:30.560996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:24.981 [2024-12-05 14:29:30.564655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.981 [2024-12-05 14:29:30.564688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.981 [2024-12-05 14:29:30.564711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:24.981 [2024-12-05 14:29:30.568571] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.981 [2024-12-05 14:29:30.568617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.981 [2024-12-05 14:29:30.568638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:24.981 [2024-12-05 14:29:30.571353] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.981 [2024-12-05 14:29:30.571385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.981 [2024-12-05 14:29:30.571396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:24.981 [2024-12-05 14:29:30.575740] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.981 [2024-12-05 14:29:30.575784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.981 [2024-12-05 14:29:30.575813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:24.981 [2024-12-05 14:29:30.579582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.981 [2024-12-05 14:29:30.579625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.981 [2024-12-05 14:29:30.579648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:24.981 [2024-12-05 14:29:30.584211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.981 [2024-12-05 14:29:30.584244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.981 [2024-12-05 14:29:30.584265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:24.981 [2024-12-05 14:29:30.587562] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.981 [2024-12-05 14:29:30.587595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.981 [2024-12-05 14:29:30.587606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:24.981 [2024-12-05 14:29:30.591292] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.981 [2024-12-05 14:29:30.591325] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.981 [2024-12-05 14:29:30.591349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:24.981 [2024-12-05 14:29:30.594963] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.981 [2024-12-05 14:29:30.594997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.981 [2024-12-05 14:29:30.595019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:24.981 [2024-12-05 14:29:30.598242] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.981 [2024-12-05 14:29:30.598276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.981 [2024-12-05 14:29:30.598300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:24.981 [2024-12-05 14:29:30.602140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.981 [2024-12-05 14:29:30.602174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.981 [2024-12-05 14:29:30.602195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:24.981 [2024-12-05 14:29:30.605596] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.981 [2024-12-05 14:29:30.605629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.981 [2024-12-05 14:29:30.605652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:24.981 [2024-12-05 14:29:30.608797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.981 [2024-12-05 14:29:30.608839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.981 [2024-12-05 14:29:30.608850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:24.981 [2024-12-05 14:29:30.612481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.981 [2024-12-05 14:29:30.612515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.981 [2024-12-05 14:29:30.612538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:24.981 [2024-12-05 14:29:30.616789] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 
00:23:24.981 [2024-12-05 14:29:30.616831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.981 [2024-12-05 14:29:30.616843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:24.981 [2024-12-05 14:29:30.620119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.981 [2024-12-05 14:29:30.620181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.981 [2024-12-05 14:29:30.620201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:24.981 [2024-12-05 14:29:30.624122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:24.981 [2024-12-05 14:29:30.624158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.981 [2024-12-05 14:29:30.624170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.242 [2024-12-05 14:29:30.627346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.242 [2024-12-05 14:29:30.627379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.242 [2024-12-05 14:29:30.627403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.242 [2024-12-05 14:29:30.631380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.242 [2024-12-05 14:29:30.631413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.242 [2024-12-05 14:29:30.631435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.242 [2024-12-05 14:29:30.634877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.242 [2024-12-05 14:29:30.634910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.242 [2024-12-05 14:29:30.634933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.242 [2024-12-05 14:29:30.637979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.242 [2024-12-05 14:29:30.638013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.242 [2024-12-05 14:29:30.638036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.242 [2024-12-05 14:29:30.642010] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1bb0d10) 00:23:25.242 [2024-12-05 14:29:30.642043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.242 [2024-12-05 14:29:30.642065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.242 [2024-12-05 14:29:30.645689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.242 [2024-12-05 14:29:30.645722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.242 [2024-12-05 14:29:30.645745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.242 [2024-12-05 14:29:30.650083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.242 [2024-12-05 14:29:30.650117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.242 [2024-12-05 14:29:30.650139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.242 [2024-12-05 14:29:30.653564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.242 [2024-12-05 14:29:30.653597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.242 [2024-12-05 14:29:30.653618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.242 [2024-12-05 14:29:30.657484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.242 [2024-12-05 14:29:30.657517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.242 [2024-12-05 14:29:30.657529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.243 [2024-12-05 14:29:30.661361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.243 [2024-12-05 14:29:30.661392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.243 [2024-12-05 14:29:30.661417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.243 [2024-12-05 14:29:30.664282] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.243 [2024-12-05 14:29:30.664314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.243 [2024-12-05 14:29:30.664336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.243 [2024-12-05 14:29:30.667840] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.243 [2024-12-05 14:29:30.667871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.243 [2024-12-05 14:29:30.667882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.243 [2024-12-05 14:29:30.671606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.243 [2024-12-05 14:29:30.671639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.243 [2024-12-05 14:29:30.671662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.243 [2024-12-05 14:29:30.675227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.243 [2024-12-05 14:29:30.675259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.243 [2024-12-05 14:29:30.675281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.243 [2024-12-05 14:29:30.679301] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.243 [2024-12-05 14:29:30.679335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.243 [2024-12-05 14:29:30.679357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.243 [2024-12-05 14:29:30.682906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.243 [2024-12-05 14:29:30.682940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.243 [2024-12-05 14:29:30.682962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.243 [2024-12-05 14:29:30.686977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.243 [2024-12-05 14:29:30.687009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.243 [2024-12-05 14:29:30.687032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.243 [2024-12-05 14:29:30.690254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.243 [2024-12-05 14:29:30.690287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.243 [2024-12-05 14:29:30.690307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:23:25.243 [2024-12-05 14:29:30.693909] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.243 [2024-12-05 14:29:30.693941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.243 [2024-12-05 14:29:30.693964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.243 [2024-12-05 14:29:30.697393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.243 [2024-12-05 14:29:30.697427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.243 [2024-12-05 14:29:30.697450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.243 [2024-12-05 14:29:30.700625] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.243 [2024-12-05 14:29:30.700658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.243 [2024-12-05 14:29:30.700679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.243 [2024-12-05 14:29:30.704557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.243 [2024-12-05 14:29:30.704591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.243 [2024-12-05 14:29:30.704613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.243 [2024-12-05 14:29:30.707977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.243 [2024-12-05 14:29:30.708011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.243 [2024-12-05 14:29:30.708024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.243 [2024-12-05 14:29:30.711288] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.243 [2024-12-05 14:29:30.711320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.243 [2024-12-05 14:29:30.711342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.243 [2024-12-05 14:29:30.715128] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.243 [2024-12-05 14:29:30.715162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.243 [2024-12-05 14:29:30.715183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.243 [2024-12-05 14:29:30.718856] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.243 [2024-12-05 14:29:30.718889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.243 [2024-12-05 14:29:30.718911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.243 [2024-12-05 14:29:30.722231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.243 [2024-12-05 14:29:30.722264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.243 [2024-12-05 14:29:30.722288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.243 [2024-12-05 14:29:30.725440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.243 [2024-12-05 14:29:30.725473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.243 [2024-12-05 14:29:30.725485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.243 [2024-12-05 14:29:30.728961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.243 [2024-12-05 14:29:30.729005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.243 [2024-12-05 14:29:30.729017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.243 [2024-12-05 14:29:30.732833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.243 [2024-12-05 14:29:30.732867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.243 [2024-12-05 14:29:30.732878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.243 [2024-12-05 14:29:30.736664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.243 [2024-12-05 14:29:30.736709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.243 [2024-12-05 14:29:30.736730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.243 [2024-12-05 14:29:30.741022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.243 [2024-12-05 14:29:30.741066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.243 [2024-12-05 14:29:30.741087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.243 [2024-12-05 14:29:30.743849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.243 [2024-12-05 14:29:30.743880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.243 [2024-12-05 14:29:30.743903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.243 [2024-12-05 14:29:30.747557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.243 [2024-12-05 14:29:30.747591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.243 [2024-12-05 14:29:30.747615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.243 [2024-12-05 14:29:30.751363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.243 [2024-12-05 14:29:30.751397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.243 [2024-12-05 14:29:30.751408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.243 [2024-12-05 14:29:30.754279] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.243 [2024-12-05 14:29:30.754312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.243 [2024-12-05 14:29:30.754336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.243 [2024-12-05 14:29:30.758319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.243 [2024-12-05 14:29:30.758352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.243 [2024-12-05 14:29:30.758374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.243 [2024-12-05 14:29:30.762025] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.243 [2024-12-05 14:29:30.762059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.243 [2024-12-05 14:29:30.762081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.243 [2024-12-05 14:29:30.765815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.243 [2024-12-05 14:29:30.765846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.243 [2024-12-05 14:29:30.765868] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.243 [2024-12-05 14:29:30.769833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.243 [2024-12-05 14:29:30.769866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.243 [2024-12-05 14:29:30.769877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.243 [2024-12-05 14:29:30.773208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.243 [2024-12-05 14:29:30.773240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.243 [2024-12-05 14:29:30.773264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.243 [2024-12-05 14:29:30.776348] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.243 [2024-12-05 14:29:30.776380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.243 [2024-12-05 14:29:30.776406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.243 [2024-12-05 14:29:30.779510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.243 [2024-12-05 14:29:30.779543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.243 [2024-12-05 14:29:30.779565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.243 [2024-12-05 14:29:30.783391] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.243 [2024-12-05 14:29:30.783424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.243 [2024-12-05 14:29:30.783444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.243 [2024-12-05 14:29:30.787492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.243 [2024-12-05 14:29:30.787526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.243 [2024-12-05 14:29:30.787537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.243 [2024-12-05 14:29:30.791306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.243 [2024-12-05 14:29:30.791338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.243 
[2024-12-05 14:29:30.791360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.243 [2024-12-05 14:29:30.794457] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.244 [2024-12-05 14:29:30.794491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.244 [2024-12-05 14:29:30.794515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.244 [2024-12-05 14:29:30.798424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.244 [2024-12-05 14:29:30.798457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.244 [2024-12-05 14:29:30.798469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.244 [2024-12-05 14:29:30.801409] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.244 [2024-12-05 14:29:30.801442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.244 [2024-12-05 14:29:30.801466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.244 [2024-12-05 14:29:30.805003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.244 [2024-12-05 14:29:30.805050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.244 [2024-12-05 14:29:30.805070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.244 [2024-12-05 14:29:30.809533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.244 [2024-12-05 14:29:30.809567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.244 [2024-12-05 14:29:30.809589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.244 [2024-12-05 14:29:30.812904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.244 [2024-12-05 14:29:30.812950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.244 [2024-12-05 14:29:30.812969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.244 [2024-12-05 14:29:30.816857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.244 [2024-12-05 14:29:30.816890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6720 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.244 [2024-12-05 14:29:30.816913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.244 [2024-12-05 14:29:30.819692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.244 [2024-12-05 14:29:30.819724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.244 [2024-12-05 14:29:30.819743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.244 [2024-12-05 14:29:30.823572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.244 [2024-12-05 14:29:30.823606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.244 [2024-12-05 14:29:30.823629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.244 [2024-12-05 14:29:30.827320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.244 [2024-12-05 14:29:30.827353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.244 [2024-12-05 14:29:30.827376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.244 [2024-12-05 14:29:30.830151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.244 [2024-12-05 14:29:30.830197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.244 [2024-12-05 14:29:30.830214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.244 [2024-12-05 14:29:30.833710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.244 [2024-12-05 14:29:30.833743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.244 [2024-12-05 14:29:30.833768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.244 [2024-12-05 14:29:30.837227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.244 [2024-12-05 14:29:30.837259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.244 [2024-12-05 14:29:30.837282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.244 [2024-12-05 14:29:30.841238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.244 [2024-12-05 14:29:30.841270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.244 [2024-12-05 14:29:30.841292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.244 [2024-12-05 14:29:30.844799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.244 [2024-12-05 14:29:30.844841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.244 [2024-12-05 14:29:30.844853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.244 [2024-12-05 14:29:30.848125] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.244 [2024-12-05 14:29:30.848157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.244 [2024-12-05 14:29:30.848177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.244 [2024-12-05 14:29:30.851895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.244 [2024-12-05 14:29:30.851928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.244 [2024-12-05 14:29:30.851950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.244 [2024-12-05 14:29:30.855694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.244 [2024-12-05 14:29:30.855728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.244 [2024-12-05 14:29:30.855748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.244 [2024-12-05 14:29:30.859240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.244 [2024-12-05 14:29:30.859274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.244 [2024-12-05 14:29:30.859295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.244 [2024-12-05 14:29:30.862476] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.244 [2024-12-05 14:29:30.862509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.244 [2024-12-05 14:29:30.862531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.244 [2024-12-05 14:29:30.865579] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.244 [2024-12-05 14:29:30.865613] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.244 [2024-12-05 14:29:30.865636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.244 [2024-12-05 14:29:30.870073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.244 [2024-12-05 14:29:30.870118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.244 [2024-12-05 14:29:30.870138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.244 [2024-12-05 14:29:30.873424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.244 [2024-12-05 14:29:30.873457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.244 [2024-12-05 14:29:30.873468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.244 [2024-12-05 14:29:30.876586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.244 [2024-12-05 14:29:30.876619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.244 [2024-12-05 14:29:30.876642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.244 [2024-12-05 14:29:30.880419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.244 [2024-12-05 14:29:30.880453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.244 [2024-12-05 14:29:30.880474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.244 [2024-12-05 14:29:30.884237] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.244 [2024-12-05 14:29:30.884284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.244 [2024-12-05 14:29:30.884295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.512 [2024-12-05 14:29:30.887787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.512 [2024-12-05 14:29:30.887830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.512 [2024-12-05 14:29:30.887849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.513 [2024-12-05 14:29:30.891413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 
00:23:25.513 [2024-12-05 14:29:30.891446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.513 [2024-12-05 14:29:30.891471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.513 [2024-12-05 14:29:30.895361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.513 [2024-12-05 14:29:30.895394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.513 [2024-12-05 14:29:30.895405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.513 [2024-12-05 14:29:30.899011] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.513 [2024-12-05 14:29:30.899044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.513 [2024-12-05 14:29:30.899068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.513 [2024-12-05 14:29:30.902875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.513 [2024-12-05 14:29:30.902907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.513 [2024-12-05 14:29:30.902918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.513 [2024-12-05 14:29:30.906095] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.513 [2024-12-05 14:29:30.906140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.513 [2024-12-05 14:29:30.906151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.513 [2024-12-05 14:29:30.909247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.513 [2024-12-05 14:29:30.909280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.513 [2024-12-05 14:29:30.909303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.513 [2024-12-05 14:29:30.912691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.513 [2024-12-05 14:29:30.912724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.513 [2024-12-05 14:29:30.912746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.513 [2024-12-05 14:29:30.916540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.513 [2024-12-05 14:29:30.916572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.513 [2024-12-05 14:29:30.916596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.513 [2024-12-05 14:29:30.919491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.513 [2024-12-05 14:29:30.919523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.513 [2024-12-05 14:29:30.919548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.513 [2024-12-05 14:29:30.924208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.513 [2024-12-05 14:29:30.924242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.513 [2024-12-05 14:29:30.924262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.513 [2024-12-05 14:29:30.927525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.513 [2024-12-05 14:29:30.927557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.513 [2024-12-05 14:29:30.927579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.513 [2024-12-05 14:29:30.931467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.513 [2024-12-05 14:29:30.931500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.513 [2024-12-05 14:29:30.931521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.513 [2024-12-05 14:29:30.935158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.513 [2024-12-05 14:29:30.935191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.513 [2024-12-05 14:29:30.935202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.513 [2024-12-05 14:29:30.939169] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.513 [2024-12-05 14:29:30.939199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.513 [2024-12-05 14:29:30.939210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.513 [2024-12-05 14:29:30.943354] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.513 [2024-12-05 14:29:30.943385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.513 [2024-12-05 14:29:30.943396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.513 [2024-12-05 14:29:30.946744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.513 [2024-12-05 14:29:30.946776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.513 [2024-12-05 14:29:30.946798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.513 [2024-12-05 14:29:30.950692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.513 [2024-12-05 14:29:30.950723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.513 [2024-12-05 14:29:30.950746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.513 [2024-12-05 14:29:30.954843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.513 [2024-12-05 14:29:30.954874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.513 [2024-12-05 14:29:30.954896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.513 [2024-12-05 14:29:30.958468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.513 [2024-12-05 14:29:30.958499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.513 [2024-12-05 14:29:30.958521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.513 [2024-12-05 14:29:30.962742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.513 [2024-12-05 14:29:30.962774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.513 [2024-12-05 14:29:30.962798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.513 [2024-12-05 14:29:30.966059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.513 [2024-12-05 14:29:30.966092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.513 [2024-12-05 14:29:30.966103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:23:25.513 [2024-12-05 14:29:30.969494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.513 [2024-12-05 14:29:30.969526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.513 [2024-12-05 14:29:30.969548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.513 [2024-12-05 14:29:30.973578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.513 [2024-12-05 14:29:30.973610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.513 [2024-12-05 14:29:30.973633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.513 [2024-12-05 14:29:30.976954] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.513 [2024-12-05 14:29:30.976987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.513 [2024-12-05 14:29:30.977007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.513 [2024-12-05 14:29:30.980409] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.513 [2024-12-05 14:29:30.980442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.513 [2024-12-05 14:29:30.980453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.513 [2024-12-05 14:29:30.984535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.513 [2024-12-05 14:29:30.984566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.513 [2024-12-05 14:29:30.984590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.513 [2024-12-05 14:29:30.987830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.513 [2024-12-05 14:29:30.987862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.514 [2024-12-05 14:29:30.987885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.514 [2024-12-05 14:29:30.992158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.514 [2024-12-05 14:29:30.992191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.514 [2024-12-05 14:29:30.992203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.514 [2024-12-05 14:29:30.995976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.514 [2024-12-05 14:29:30.996010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.514 [2024-12-05 14:29:30.996031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.514 [2024-12-05 14:29:30.998771] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.514 [2024-12-05 14:29:30.998825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.514 [2024-12-05 14:29:30.998845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.514 [2024-12-05 14:29:31.003141] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.514 [2024-12-05 14:29:31.003174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.514 [2024-12-05 14:29:31.003185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.514 [2024-12-05 14:29:31.007475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.514 [2024-12-05 14:29:31.007508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.514 [2024-12-05 14:29:31.007531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.514 [2024-12-05 14:29:31.011085] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.514 [2024-12-05 14:29:31.011118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.514 [2024-12-05 14:29:31.011129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.514 [2024-12-05 14:29:31.014011] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.514 [2024-12-05 14:29:31.014056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.514 [2024-12-05 14:29:31.014076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.514 [2024-12-05 14:29:31.017557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.514 [2024-12-05 14:29:31.017590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.514 [2024-12-05 14:29:31.017612] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.514 [2024-12-05 14:29:31.021823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.514 [2024-12-05 14:29:31.021866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.514 [2024-12-05 14:29:31.021877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.514 [2024-12-05 14:29:31.025584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.514 [2024-12-05 14:29:31.025629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.514 [2024-12-05 14:29:31.025640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.514 [2024-12-05 14:29:31.029401] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.514 [2024-12-05 14:29:31.029446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.514 [2024-12-05 14:29:31.029467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.514 [2024-12-05 14:29:31.033389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.514 [2024-12-05 14:29:31.033422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.514 [2024-12-05 14:29:31.033433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.514 [2024-12-05 14:29:31.037078] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.514 [2024-12-05 14:29:31.037112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.514 [2024-12-05 14:29:31.037124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.514 [2024-12-05 14:29:31.041372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.514 [2024-12-05 14:29:31.041406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.514 [2024-12-05 14:29:31.041417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.514 [2024-12-05 14:29:31.046021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.514 [2024-12-05 14:29:31.046066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.514 [2024-12-05 14:29:31.046077] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.514 [2024-12-05 14:29:31.049799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.514 [2024-12-05 14:29:31.049855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.514 [2024-12-05 14:29:31.049874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.514 [2024-12-05 14:29:31.053777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.514 [2024-12-05 14:29:31.053827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.514 [2024-12-05 14:29:31.053843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.514 [2024-12-05 14:29:31.057357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.514 [2024-12-05 14:29:31.057391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.514 [2024-12-05 14:29:31.057402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.514 [2024-12-05 14:29:31.060861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.514 [2024-12-05 14:29:31.060892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.514 [2024-12-05 14:29:31.060903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.514 [2024-12-05 14:29:31.064464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.514 [2024-12-05 14:29:31.064495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.514 [2024-12-05 14:29:31.064507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.514 [2024-12-05 14:29:31.067362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.514 [2024-12-05 14:29:31.067395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.514 [2024-12-05 14:29:31.067406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.514 [2024-12-05 14:29:31.071182] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.514 [2024-12-05 14:29:31.071214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:25.514 [2024-12-05 14:29:31.071226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.514 [2024-12-05 14:29:31.075742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.514 [2024-12-05 14:29:31.075788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.514 [2024-12-05 14:29:31.075823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.514 [2024-12-05 14:29:31.079371] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.514 [2024-12-05 14:29:31.079404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.514 [2024-12-05 14:29:31.079426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.514 [2024-12-05 14:29:31.083098] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.514 [2024-12-05 14:29:31.083131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.514 [2024-12-05 14:29:31.083152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.514 [2024-12-05 14:29:31.087167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.514 [2024-12-05 14:29:31.087200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.514 [2024-12-05 14:29:31.087222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.514 [2024-12-05 14:29:31.091293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.514 [2024-12-05 14:29:31.091324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.515 [2024-12-05 14:29:31.091336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.515 [2024-12-05 14:29:31.095283] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.515 [2024-12-05 14:29:31.095314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.515 [2024-12-05 14:29:31.095326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.515 [2024-12-05 14:29:31.098559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.515 [2024-12-05 14:29:31.098592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12192 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.515 [2024-12-05 14:29:31.098604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.515 [2024-12-05 14:29:31.102267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.515 [2024-12-05 14:29:31.102310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.515 [2024-12-05 14:29:31.102322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.515 [2024-12-05 14:29:31.105893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.515 [2024-12-05 14:29:31.105936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.515 [2024-12-05 14:29:31.105958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.515 [2024-12-05 14:29:31.109460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.515 [2024-12-05 14:29:31.109492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.515 [2024-12-05 14:29:31.109514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.515 [2024-12-05 14:29:31.113181] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.515 [2024-12-05 14:29:31.113213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.515 [2024-12-05 14:29:31.113225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.515 [2024-12-05 14:29:31.117194] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.515 [2024-12-05 14:29:31.117238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.515 [2024-12-05 14:29:31.117259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.515 [2024-12-05 14:29:31.120948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.515 [2024-12-05 14:29:31.120980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.515 [2024-12-05 14:29:31.120991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.515 [2024-12-05 14:29:31.125369] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.515 [2024-12-05 14:29:31.125411] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.515 [2024-12-05 14:29:31.125434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.515 [2024-12-05 14:29:31.129328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.515 [2024-12-05 14:29:31.129373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.515 [2024-12-05 14:29:31.129395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.515 [2024-12-05 14:29:31.133486] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.515 [2024-12-05 14:29:31.133518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.515 [2024-12-05 14:29:31.133529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.515 [2024-12-05 14:29:31.136774] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.515 [2024-12-05 14:29:31.136819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.515 [2024-12-05 14:29:31.136840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.515 [2024-12-05 14:29:31.141134] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.515 [2024-12-05 14:29:31.141167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.515 [2024-12-05 14:29:31.141179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.515 [2024-12-05 14:29:31.144918] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.515 [2024-12-05 14:29:31.144961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.515 [2024-12-05 14:29:31.144983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.515 [2024-12-05 14:29:31.148338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.515 [2024-12-05 14:29:31.148383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.515 [2024-12-05 14:29:31.148404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.515 [2024-12-05 14:29:31.151838] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.515 [2024-12-05 14:29:31.151900] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.515 [2024-12-05 14:29:31.151920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.796 [2024-12-05 14:29:31.156532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.796 [2024-12-05 14:29:31.156601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.796 [2024-12-05 14:29:31.156638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.796 [2024-12-05 14:29:31.159754] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.796 [2024-12-05 14:29:31.159799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.796 [2024-12-05 14:29:31.159829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.796 [2024-12-05 14:29:31.163876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.796 [2024-12-05 14:29:31.163920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.796 [2024-12-05 14:29:31.163932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.796 [2024-12-05 14:29:31.167715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.796 [2024-12-05 14:29:31.167758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.796 [2024-12-05 14:29:31.167770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.796 [2024-12-05 14:29:31.172027] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.796 [2024-12-05 14:29:31.172073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.796 [2024-12-05 14:29:31.172086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.796 [2024-12-05 14:29:31.176790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.796 [2024-12-05 14:29:31.176841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.796 [2024-12-05 14:29:31.176854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.796 [2024-12-05 14:29:31.180785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 
00:23:25.796 [2024-12-05 14:29:31.180826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.796 [2024-12-05 14:29:31.180839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.796 [2024-12-05 14:29:31.185129] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.796 [2024-12-05 14:29:31.185158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.796 [2024-12-05 14:29:31.185170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.796 [2024-12-05 14:29:31.189197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.796 [2024-12-05 14:29:31.189239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.796 [2024-12-05 14:29:31.189250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.796 [2024-12-05 14:29:31.193153] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.796 [2024-12-05 14:29:31.193184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.796 [2024-12-05 14:29:31.193195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.796 [2024-12-05 14:29:31.196565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.796 [2024-12-05 14:29:31.196595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.796 [2024-12-05 14:29:31.196606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.796 [2024-12-05 14:29:31.200546] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.796 [2024-12-05 14:29:31.200576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.796 [2024-12-05 14:29:31.200586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.796 [2024-12-05 14:29:31.203983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.796 [2024-12-05 14:29:31.204043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.796 [2024-12-05 14:29:31.204054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.796 [2024-12-05 14:29:31.207906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.796 [2024-12-05 14:29:31.207948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.796 [2024-12-05 14:29:31.207967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.796 [2024-12-05 14:29:31.211345] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.796 [2024-12-05 14:29:31.211375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.796 [2024-12-05 14:29:31.211386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.796 [2024-12-05 14:29:31.214959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.796 [2024-12-05 14:29:31.215001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.796 [2024-12-05 14:29:31.215012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.796 [2024-12-05 14:29:31.219080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.796 [2024-12-05 14:29:31.219109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.796 [2024-12-05 14:29:31.219120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.796 [2024-12-05 14:29:31.223044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.796 [2024-12-05 14:29:31.223087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.796 [2024-12-05 14:29:31.223098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.796 [2024-12-05 14:29:31.225945] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.796 [2024-12-05 14:29:31.225975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.796 [2024-12-05 14:29:31.225986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.796 [2024-12-05 14:29:31.230070] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.796 [2024-12-05 14:29:31.230101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.796 [2024-12-05 14:29:31.230112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.796 [2024-12-05 14:29:31.234111] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.796 [2024-12-05 14:29:31.234140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.796 [2024-12-05 14:29:31.234151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.796 [2024-12-05 14:29:31.237561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.796 [2024-12-05 14:29:31.237590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.796 [2024-12-05 14:29:31.237601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.796 [2024-12-05 14:29:31.240762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.796 [2024-12-05 14:29:31.240791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.796 [2024-12-05 14:29:31.240815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.796 [2024-12-05 14:29:31.244339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.796 [2024-12-05 14:29:31.244368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.796 [2024-12-05 14:29:31.244379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.797 [2024-12-05 14:29:31.248357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.797 [2024-12-05 14:29:31.248386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.797 [2024-12-05 14:29:31.248397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.797 [2024-12-05 14:29:31.251579] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.797 [2024-12-05 14:29:31.251608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.797 [2024-12-05 14:29:31.251619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.797 [2024-12-05 14:29:31.255392] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.797 [2024-12-05 14:29:31.255422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.797 [2024-12-05 14:29:31.255433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:23:25.797 [2024-12-05 14:29:31.258638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.797 [2024-12-05 14:29:31.258669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.797 [2024-12-05 14:29:31.258680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.797 [2024-12-05 14:29:31.262507] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.797 [2024-12-05 14:29:31.262537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.797 [2024-12-05 14:29:31.262548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.797 [2024-12-05 14:29:31.265793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.797 [2024-12-05 14:29:31.265832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.797 [2024-12-05 14:29:31.265844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.797 [2024-12-05 14:29:31.269134] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.797 [2024-12-05 14:29:31.269164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.797 [2024-12-05 14:29:31.269175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.797 [2024-12-05 14:29:31.273092] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.797 [2024-12-05 14:29:31.273121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.797 [2024-12-05 14:29:31.273132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.797 [2024-12-05 14:29:31.276661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.797 [2024-12-05 14:29:31.276705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.797 [2024-12-05 14:29:31.276716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.797 [2024-12-05 14:29:31.280631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.797 [2024-12-05 14:29:31.280673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.797 [2024-12-05 14:29:31.280684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.797 [2024-12-05 14:29:31.284238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.797 [2024-12-05 14:29:31.284271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.797 [2024-12-05 14:29:31.284282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.797 [2024-12-05 14:29:31.289180] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.797 [2024-12-05 14:29:31.289223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.797 [2024-12-05 14:29:31.289234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.797 [2024-12-05 14:29:31.292590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.797 [2024-12-05 14:29:31.292621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.797 [2024-12-05 14:29:31.292631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.797 [2024-12-05 14:29:31.295989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.797 [2024-12-05 14:29:31.296019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.797 [2024-12-05 14:29:31.296031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.797 [2024-12-05 14:29:31.299709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.797 [2024-12-05 14:29:31.299739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.797 [2024-12-05 14:29:31.299750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.797 [2024-12-05 14:29:31.303502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.797 [2024-12-05 14:29:31.303531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.797 [2024-12-05 14:29:31.303541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.797 [2024-12-05 14:29:31.306978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.797 [2024-12-05 14:29:31.307007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.797 [2024-12-05 14:29:31.307018] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.797 [2024-12-05 14:29:31.310863] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.797 [2024-12-05 14:29:31.310894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.797 [2024-12-05 14:29:31.310905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.797 [2024-12-05 14:29:31.314370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.797 [2024-12-05 14:29:31.314400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.797 [2024-12-05 14:29:31.314411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.797 [2024-12-05 14:29:31.317978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.797 [2024-12-05 14:29:31.318009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.797 [2024-12-05 14:29:31.318020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.797 [2024-12-05 14:29:31.321584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.797 [2024-12-05 14:29:31.321614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.797 [2024-12-05 14:29:31.321625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.797 [2024-12-05 14:29:31.325188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.797 [2024-12-05 14:29:31.325218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.797 [2024-12-05 14:29:31.325228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.797 [2024-12-05 14:29:31.328334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.797 [2024-12-05 14:29:31.328363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.797 [2024-12-05 14:29:31.328374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.797 [2024-12-05 14:29:31.332266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.797 [2024-12-05 14:29:31.332297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.797 [2024-12-05 14:29:31.332318] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.797 [2024-12-05 14:29:31.335824] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.797 [2024-12-05 14:29:31.335865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.797 [2024-12-05 14:29:31.335876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.797 [2024-12-05 14:29:31.339215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.797 [2024-12-05 14:29:31.339245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.797 [2024-12-05 14:29:31.339255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.797 [2024-12-05 14:29:31.342388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.797 [2024-12-05 14:29:31.342418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.797 [2024-12-05 14:29:31.342429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.797 [2024-12-05 14:29:31.346370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.798 [2024-12-05 14:29:31.346401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.798 [2024-12-05 14:29:31.346411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.798 [2024-12-05 14:29:31.349749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.798 [2024-12-05 14:29:31.349779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.798 [2024-12-05 14:29:31.349790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.798 [2024-12-05 14:29:31.353503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.798 [2024-12-05 14:29:31.353532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.798 [2024-12-05 14:29:31.353543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.798 [2024-12-05 14:29:31.356343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:25.798 [2024-12-05 14:29:31.356373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:25.798 [2024-12-05 14:29:31.356384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:25.798 [2024-12-05 14:29:31.360671] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10)
00:23:25.798 [2024-12-05 14:29:31.360702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:25.798 [2024-12-05 14:29:31.360713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[00:23:25.798 - 00:23:26.335: repeated entries of the same pattern, nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10), each followed by the affected READ (sqid:1 nsid:1 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and its completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22) on qid:1]
00:23:26.335 [2024-12-05 14:29:31.872421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10)
00:23:26.335 [2024-12-05 14:29:31.872451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:26.335 [2024-12-05 14:29:31.872463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.335 [2024-12-05 14:29:31.875922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.335 [2024-12-05 14:29:31.875949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.335 [2024-12-05 14:29:31.875979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.335 [2024-12-05 14:29:31.880144] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.335 [2024-12-05 14:29:31.880178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.335 [2024-12-05 14:29:31.880190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.335 [2024-12-05 14:29:31.883605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.335 [2024-12-05 14:29:31.883650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.335 [2024-12-05 14:29:31.883662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.335 [2024-12-05 14:29:31.886793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.335 [2024-12-05 14:29:31.886843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.335 [2024-12-05 14:29:31.886861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.335 [2024-12-05 14:29:31.890729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.335 [2024-12-05 14:29:31.890761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.335 [2024-12-05 14:29:31.890784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.335 [2024-12-05 14:29:31.894404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.335 [2024-12-05 14:29:31.894436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.335 [2024-12-05 14:29:31.894461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.335 [2024-12-05 14:29:31.898449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.335 [2024-12-05 14:29:31.898483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25568 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.335 [2024-12-05 14:29:31.898507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.335 [2024-12-05 14:29:31.902192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.335 [2024-12-05 14:29:31.902224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.335 [2024-12-05 14:29:31.902246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.335 [2024-12-05 14:29:31.905860] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.335 [2024-12-05 14:29:31.905891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.335 [2024-12-05 14:29:31.905913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.335 [2024-12-05 14:29:31.909781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.335 [2024-12-05 14:29:31.909826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.335 [2024-12-05 14:29:31.909846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.335 [2024-12-05 14:29:31.913462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.335 [2024-12-05 14:29:31.913496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.335 [2024-12-05 14:29:31.913520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.335 [2024-12-05 14:29:31.917445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.335 [2024-12-05 14:29:31.917479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.335 [2024-12-05 14:29:31.917491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.335 [2024-12-05 14:29:31.921442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.335 [2024-12-05 14:29:31.921474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.335 [2024-12-05 14:29:31.921485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.335 [2024-12-05 14:29:31.925528] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.335 [2024-12-05 14:29:31.925560] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.336 [2024-12-05 14:29:31.925581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.336 [2024-12-05 14:29:31.929899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.336 [2024-12-05 14:29:31.929933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.336 [2024-12-05 14:29:31.929957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.336 [2024-12-05 14:29:31.933880] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.336 [2024-12-05 14:29:31.933912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.336 [2024-12-05 14:29:31.933935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.336 [2024-12-05 14:29:31.936798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.336 [2024-12-05 14:29:31.936852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.336 [2024-12-05 14:29:31.936877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.336 [2024-12-05 14:29:31.941005] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.336 [2024-12-05 14:29:31.941050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.336 [2024-12-05 14:29:31.941061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.336 [2024-12-05 14:29:31.943910] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.336 [2024-12-05 14:29:31.943948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.336 [2024-12-05 14:29:31.943977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.336 [2024-12-05 14:29:31.947472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.336 [2024-12-05 14:29:31.947505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.336 [2024-12-05 14:29:31.947517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.336 [2024-12-05 14:29:31.951406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.336 [2024-12-05 14:29:31.951439] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.336 [2024-12-05 14:29:31.951464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.336 [2024-12-05 14:29:31.954799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.336 [2024-12-05 14:29:31.954842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.336 [2024-12-05 14:29:31.954865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.336 [2024-12-05 14:29:31.958329] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.336 [2024-12-05 14:29:31.958362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.336 [2024-12-05 14:29:31.958386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.336 [2024-12-05 14:29:31.961850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.336 [2024-12-05 14:29:31.961883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.336 [2024-12-05 14:29:31.961906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.336 [2024-12-05 14:29:31.965938] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.336 [2024-12-05 14:29:31.965971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.336 [2024-12-05 14:29:31.965994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.336 [2024-12-05 14:29:31.968987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.336 [2024-12-05 14:29:31.969020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.336 [2024-12-05 14:29:31.969040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.336 [2024-12-05 14:29:31.972898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.336 [2024-12-05 14:29:31.972929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.336 [2024-12-05 14:29:31.972941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.336 [2024-12-05 14:29:31.976584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 
00:23:26.336 [2024-12-05 14:29:31.976617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.336 [2024-12-05 14:29:31.976628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.597 [2024-12-05 14:29:31.980265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.597 [2024-12-05 14:29:31.980331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.597 [2024-12-05 14:29:31.980358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.597 [2024-12-05 14:29:31.983749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.597 [2024-12-05 14:29:31.983799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.597 [2024-12-05 14:29:31.983837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.597 [2024-12-05 14:29:31.987649] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.597 [2024-12-05 14:29:31.987683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.597 [2024-12-05 14:29:31.987705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.597 [2024-12-05 14:29:31.991125] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.597 [2024-12-05 14:29:31.991171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.597 [2024-12-05 14:29:31.991182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.597 [2024-12-05 14:29:31.994701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.597 [2024-12-05 14:29:31.994734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.597 [2024-12-05 14:29:31.994757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.597 [2024-12-05 14:29:31.998979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.597 [2024-12-05 14:29:31.999013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.597 [2024-12-05 14:29:31.999035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.597 [2024-12-05 14:29:32.002263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1bb0d10) 00:23:26.597 [2024-12-05 14:29:32.002298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.597 [2024-12-05 14:29:32.002320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.597 [2024-12-05 14:29:32.006271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.597 [2024-12-05 14:29:32.006304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.597 [2024-12-05 14:29:32.006325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.597 [2024-12-05 14:29:32.009160] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.597 [2024-12-05 14:29:32.009193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.597 [2024-12-05 14:29:32.009204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.597 [2024-12-05 14:29:32.013384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.597 [2024-12-05 14:29:32.013418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.597 [2024-12-05 14:29:32.013428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.597 [2024-12-05 14:29:32.017176] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.597 [2024-12-05 14:29:32.017221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.597 [2024-12-05 14:29:32.017233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.597 [2024-12-05 14:29:32.020841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.597 [2024-12-05 14:29:32.020885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.597 [2024-12-05 14:29:32.020897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.597 [2024-12-05 14:29:32.023728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.597 [2024-12-05 14:29:32.023761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.597 [2024-12-05 14:29:32.023782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.597 [2024-12-05 14:29:32.026827] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.597 [2024-12-05 14:29:32.026862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.597 [2024-12-05 14:29:32.026882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.597 [2024-12-05 14:29:32.030928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.597 [2024-12-05 14:29:32.030963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.597 [2024-12-05 14:29:32.030982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.597 [2024-12-05 14:29:32.034260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.597 [2024-12-05 14:29:32.034294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.597 [2024-12-05 14:29:32.034307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.597 [2024-12-05 14:29:32.037830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.597 [2024-12-05 14:29:32.037863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.597 [2024-12-05 14:29:32.037887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.597 [2024-12-05 14:29:32.041531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.597 [2024-12-05 14:29:32.041563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.597 [2024-12-05 14:29:32.041575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.597 [2024-12-05 14:29:32.044949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.597 [2024-12-05 14:29:32.044981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.597 [2024-12-05 14:29:32.044993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.597 [2024-12-05 14:29:32.048606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.597 [2024-12-05 14:29:32.048640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.597 [2024-12-05 14:29:32.048651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:26.597 [2024-12-05 14:29:32.052176] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.597 [2024-12-05 14:29:32.052210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.597 [2024-12-05 14:29:32.052221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.597 [2024-12-05 14:29:32.056168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.597 [2024-12-05 14:29:32.056201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.597 [2024-12-05 14:29:32.056213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.598 [2024-12-05 14:29:32.059790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.598 [2024-12-05 14:29:32.059832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.598 [2024-12-05 14:29:32.059851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.598 [2024-12-05 14:29:32.063164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.598 [2024-12-05 14:29:32.063199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.598 [2024-12-05 14:29:32.063219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.598 [2024-12-05 14:29:32.066859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.598 [2024-12-05 14:29:32.066891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.598 [2024-12-05 14:29:32.066912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.598 [2024-12-05 14:29:32.069779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.598 [2024-12-05 14:29:32.069823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.598 [2024-12-05 14:29:32.069844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.598 [2024-12-05 14:29:32.073868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.598 [2024-12-05 14:29:32.073900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.598 [2024-12-05 14:29:32.073924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.598 [2024-12-05 14:29:32.077777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.598 [2024-12-05 14:29:32.077821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.598 [2024-12-05 14:29:32.077843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.598 [2024-12-05 14:29:32.081297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.598 [2024-12-05 14:29:32.081329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.598 [2024-12-05 14:29:32.081351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.598 [2024-12-05 14:29:32.084898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.598 [2024-12-05 14:29:32.084931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.598 [2024-12-05 14:29:32.084953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.598 [2024-12-05 14:29:32.087979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.598 [2024-12-05 14:29:32.088013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.598 [2024-12-05 14:29:32.088032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.598 [2024-12-05 14:29:32.091455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.598 [2024-12-05 14:29:32.091488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.598 [2024-12-05 14:29:32.091513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.598 [2024-12-05 14:29:32.095685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.598 [2024-12-05 14:29:32.095718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.598 [2024-12-05 14:29:32.095741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.598 [2024-12-05 14:29:32.099307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.598 [2024-12-05 14:29:32.099341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.598 [2024-12-05 14:29:32.099351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.598 [2024-12-05 14:29:32.103417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.598 [2024-12-05 14:29:32.103449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.598 [2024-12-05 14:29:32.103461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.598 [2024-12-05 14:29:32.106763] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.598 [2024-12-05 14:29:32.106795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.598 [2024-12-05 14:29:32.106829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.598 [2024-12-05 14:29:32.110146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.598 [2024-12-05 14:29:32.110189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.598 [2024-12-05 14:29:32.110211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.598 [2024-12-05 14:29:32.113695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.598 [2024-12-05 14:29:32.113728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.598 [2024-12-05 14:29:32.113750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.598 [2024-12-05 14:29:32.117033] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.598 [2024-12-05 14:29:32.117077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.598 [2024-12-05 14:29:32.117097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.598 [2024-12-05 14:29:32.120982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.598 [2024-12-05 14:29:32.121027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.598 [2024-12-05 14:29:32.121039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.598 [2024-12-05 14:29:32.124929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.598 [2024-12-05 14:29:32.124962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.598 [2024-12-05 14:29:32.124984] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.598 [2024-12-05 14:29:32.128035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.598 [2024-12-05 14:29:32.128068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.598 [2024-12-05 14:29:32.128088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.598 [2024-12-05 14:29:32.131827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.598 [2024-12-05 14:29:32.131857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.598 [2024-12-05 14:29:32.131880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.598 [2024-12-05 14:29:32.135818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.598 [2024-12-05 14:29:32.135851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.598 [2024-12-05 14:29:32.135874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.598 [2024-12-05 14:29:32.138754] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.598 [2024-12-05 14:29:32.138788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.598 [2024-12-05 14:29:32.138817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.598 [2024-12-05 14:29:32.143231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.598 [2024-12-05 14:29:32.143264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.598 [2024-12-05 14:29:32.143275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.598 [2024-12-05 14:29:32.146758] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.598 [2024-12-05 14:29:32.146791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.598 [2024-12-05 14:29:32.146823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.598 [2024-12-05 14:29:32.150292] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.598 [2024-12-05 14:29:32.150325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.598 
[2024-12-05 14:29:32.150350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.598 [2024-12-05 14:29:32.154393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.598 [2024-12-05 14:29:32.154425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.598 [2024-12-05 14:29:32.154436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.599 [2024-12-05 14:29:32.157709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.599 [2024-12-05 14:29:32.157742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.599 [2024-12-05 14:29:32.157763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.599 [2024-12-05 14:29:32.160656] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.599 [2024-12-05 14:29:32.160689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.599 [2024-12-05 14:29:32.160700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.599 [2024-12-05 14:29:32.164460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.599 [2024-12-05 14:29:32.164493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.599 [2024-12-05 14:29:32.164504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.599 [2024-12-05 14:29:32.168115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.599 [2024-12-05 14:29:32.168148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.599 [2024-12-05 14:29:32.168170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.599 [2024-12-05 14:29:32.172231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.599 [2024-12-05 14:29:32.172264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.599 [2024-12-05 14:29:32.172285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.599 [2024-12-05 14:29:32.176210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.599 [2024-12-05 14:29:32.176242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23936 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.599 [2024-12-05 14:29:32.176262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.599 [2024-12-05 14:29:32.179812] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.599 [2024-12-05 14:29:32.179843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.599 [2024-12-05 14:29:32.179863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.599 [2024-12-05 14:29:32.183281] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.599 [2024-12-05 14:29:32.183314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.599 [2024-12-05 14:29:32.183326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.599 [2024-12-05 14:29:32.186818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.599 [2024-12-05 14:29:32.186850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.599 [2024-12-05 14:29:32.186873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.599 [2024-12-05 14:29:32.190597] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.599 [2024-12-05 14:29:32.190630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.599 [2024-12-05 14:29:32.190653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.599 [2024-12-05 14:29:32.194164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.599 [2024-12-05 14:29:32.194197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.599 [2024-12-05 14:29:32.194219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.599 [2024-12-05 14:29:32.197479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.599 [2024-12-05 14:29:32.197512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.599 [2024-12-05 14:29:32.197524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.599 [2024-12-05 14:29:32.200585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.599 [2024-12-05 14:29:32.200618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:4 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.599 [2024-12-05 14:29:32.200640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.599 [2024-12-05 14:29:32.204681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.599 [2024-12-05 14:29:32.204713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.599 [2024-12-05 14:29:32.204736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.599 [2024-12-05 14:29:32.209129] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.599 [2024-12-05 14:29:32.209162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.599 [2024-12-05 14:29:32.209173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.599 [2024-12-05 14:29:32.212352] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.599 [2024-12-05 14:29:32.212384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.599 [2024-12-05 14:29:32.212408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.599 [2024-12-05 14:29:32.216416] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.599 [2024-12-05 14:29:32.216449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.599 [2024-12-05 14:29:32.216473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.599 [2024-12-05 14:29:32.220001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.599 [2024-12-05 14:29:32.220033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.599 [2024-12-05 14:29:32.220044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.599 [2024-12-05 14:29:32.223216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.599 [2024-12-05 14:29:32.223262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.599 [2024-12-05 14:29:32.223282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.599 [2024-12-05 14:29:32.227229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.599 [2024-12-05 14:29:32.227263] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.599 [2024-12-05 14:29:32.227285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.599 [2024-12-05 14:29:32.230566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.599 [2024-12-05 14:29:32.230599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.599 [2024-12-05 14:29:32.230621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.599 [2024-12-05 14:29:32.234066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.599 [2024-12-05 14:29:32.234098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.599 [2024-12-05 14:29:32.234109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.599 [2024-12-05 14:29:32.238146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.599 [2024-12-05 14:29:32.238183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.599 [2024-12-05 14:29:32.238195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.860 [2024-12-05 14:29:32.242209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.860 [2024-12-05 14:29:32.242244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.860 [2024-12-05 14:29:32.242255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.860 [2024-12-05 14:29:32.247322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.860 [2024-12-05 14:29:32.247355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.860 [2024-12-05 14:29:32.247368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.860 [2024-12-05 14:29:32.250157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.860 [2024-12-05 14:29:32.250191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.860 [2024-12-05 14:29:32.250203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.860 [2024-12-05 14:29:32.254448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.860 
[2024-12-05 14:29:32.254491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.860 [2024-12-05 14:29:32.254512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.860 [2024-12-05 14:29:32.258333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.860 [2024-12-05 14:29:32.258378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.860 [2024-12-05 14:29:32.258389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.860 [2024-12-05 14:29:32.262595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.860 [2024-12-05 14:29:32.262639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.860 [2024-12-05 14:29:32.262659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.860 [2024-12-05 14:29:32.266430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.860 [2024-12-05 14:29:32.266465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.860 [2024-12-05 14:29:32.266476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.860 [2024-12-05 14:29:32.270096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.860 [2024-12-05 14:29:32.270129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.860 [2024-12-05 14:29:32.270150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.860 [2024-12-05 14:29:32.273730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.860 [2024-12-05 14:29:32.273775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.860 [2024-12-05 14:29:32.273796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.860 [2024-12-05 14:29:32.277595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.860 [2024-12-05 14:29:32.277638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.860 [2024-12-05 14:29:32.277661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.860 [2024-12-05 14:29:32.281518] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1bb0d10) 00:23:26.860 [2024-12-05 14:29:32.281563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.860 [2024-12-05 14:29:32.281584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.860 [2024-12-05 14:29:32.285868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.860 [2024-12-05 14:29:32.285911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.860 [2024-12-05 14:29:32.285932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.860 [2024-12-05 14:29:32.288991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.860 [2024-12-05 14:29:32.289034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.860 [2024-12-05 14:29:32.289055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.860 [2024-12-05 14:29:32.292734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.860 [2024-12-05 14:29:32.292776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.860 [2024-12-05 14:29:32.292798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.860 [2024-12-05 14:29:32.297434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.860 [2024-12-05 14:29:32.297468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.860 [2024-12-05 14:29:32.297491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.860 [2024-12-05 14:29:32.300768] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.860 [2024-12-05 14:29:32.300824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.860 [2024-12-05 14:29:32.300837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.860 [2024-12-05 14:29:32.304710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.860 [2024-12-05 14:29:32.304743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.860 [2024-12-05 14:29:32.304762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.860 [2024-12-05 14:29:32.308061] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.860 [2024-12-05 14:29:32.308096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.860 [2024-12-05 14:29:32.308108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.860 [2024-12-05 14:29:32.311992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.860 [2024-12-05 14:29:32.312027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.860 [2024-12-05 14:29:32.312038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.860 [2024-12-05 14:29:32.315212] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.860 [2024-12-05 14:29:32.315244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.860 [2024-12-05 14:29:32.315258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.860 [2024-12-05 14:29:32.319946] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.860 [2024-12-05 14:29:32.320015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.860 [2024-12-05 14:29:32.320028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.860 [2024-12-05 14:29:32.323152] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.860 [2024-12-05 14:29:32.323189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.861 [2024-12-05 14:29:32.323209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.861 [2024-12-05 14:29:32.326531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.861 [2024-12-05 14:29:32.326564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.861 [2024-12-05 14:29:32.326575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.861 [2024-12-05 14:29:32.330248] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.861 [2024-12-05 14:29:32.330282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.861 [2024-12-05 14:29:32.330293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:23:26.861 [2024-12-05 14:29:32.333833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.861 [2024-12-05 14:29:32.333877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.861 [2024-12-05 14:29:32.333897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.861 [2024-12-05 14:29:32.337718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.861 [2024-12-05 14:29:32.337763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.861 [2024-12-05 14:29:32.337783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.861 [2024-12-05 14:29:32.341462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.861 [2024-12-05 14:29:32.341503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.861 [2024-12-05 14:29:32.341527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.861 [2024-12-05 14:29:32.344910] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.861 [2024-12-05 14:29:32.344954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.861 [2024-12-05 14:29:32.344974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.861 [2024-12-05 14:29:32.348115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.861 [2024-12-05 14:29:32.348149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.861 [2024-12-05 14:29:32.348161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.861 [2024-12-05 14:29:32.351766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.861 [2024-12-05 14:29:32.351823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.861 [2024-12-05 14:29:32.351836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.861 [2024-12-05 14:29:32.355726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.861 [2024-12-05 14:29:32.355767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.861 [2024-12-05 14:29:32.355792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.861 [2024-12-05 14:29:32.359317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.861 [2024-12-05 14:29:32.359363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.861 [2024-12-05 14:29:32.359383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.861 [2024-12-05 14:29:32.362923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.861 [2024-12-05 14:29:32.362968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.861 [2024-12-05 14:29:32.362988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.861 [2024-12-05 14:29:32.366924] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.861 [2024-12-05 14:29:32.366969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.861 [2024-12-05 14:29:32.366989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.861 [2024-12-05 14:29:32.370510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.861 [2024-12-05 14:29:32.370539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.861 [2024-12-05 14:29:32.370557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.861 [2024-12-05 14:29:32.374493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.861 [2024-12-05 14:29:32.374537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.861 [2024-12-05 14:29:32.374560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.861 [2024-12-05 14:29:32.378688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.861 [2024-12-05 14:29:32.378731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.861 [2024-12-05 14:29:32.378754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.861 [2024-12-05 14:29:32.382022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.861 [2024-12-05 14:29:32.382056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.861 [2024-12-05 14:29:32.382068] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.861 [2024-12-05 14:29:32.385382] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.861 [2024-12-05 14:29:32.385417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.861 [2024-12-05 14:29:32.385429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.861 [2024-12-05 14:29:32.390228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.861 [2024-12-05 14:29:32.390271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.861 [2024-12-05 14:29:32.390283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.861 [2024-12-05 14:29:32.394723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.861 [2024-12-05 14:29:32.394761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.861 [2024-12-05 14:29:32.394773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.861 [2024-12-05 14:29:32.398577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.861 [2024-12-05 14:29:32.398618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.861 [2024-12-05 14:29:32.398630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.861 [2024-12-05 14:29:32.402661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.861 [2024-12-05 14:29:32.402702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.861 [2024-12-05 14:29:32.402713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.861 [2024-12-05 14:29:32.406541] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.861 [2024-12-05 14:29:32.406582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.861 [2024-12-05 14:29:32.406593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.861 [2024-12-05 14:29:32.410636] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.861 [2024-12-05 14:29:32.410665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:26.861 [2024-12-05 14:29:32.410676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.861 [2024-12-05 14:29:32.413246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.861 [2024-12-05 14:29:32.413274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.861 [2024-12-05 14:29:32.413292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.861 [2024-12-05 14:29:32.417893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.861 [2024-12-05 14:29:32.417924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.861 [2024-12-05 14:29:32.417936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.861 [2024-12-05 14:29:32.421159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.861 [2024-12-05 14:29:32.421189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.861 [2024-12-05 14:29:32.421200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.861 [2024-12-05 14:29:32.425394] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.862 [2024-12-05 14:29:32.425426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.862 [2024-12-05 14:29:32.425437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.862 [2024-12-05 14:29:32.428947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.862 [2024-12-05 14:29:32.428977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.862 [2024-12-05 14:29:32.428988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.862 [2024-12-05 14:29:32.433204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.862 [2024-12-05 14:29:32.433245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.862 [2024-12-05 14:29:32.433255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.862 [2024-12-05 14:29:32.436820] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.862 [2024-12-05 14:29:32.436860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16032 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.862 [2024-12-05 14:29:32.436871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.862 [2024-12-05 14:29:32.439623] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.862 [2024-12-05 14:29:32.439664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.862 [2024-12-05 14:29:32.439675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.862 [2024-12-05 14:29:32.443647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.862 [2024-12-05 14:29:32.443690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.862 [2024-12-05 14:29:32.443701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.862 [2024-12-05 14:29:32.447245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.862 [2024-12-05 14:29:32.447287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.862 [2024-12-05 14:29:32.447299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.862 [2024-12-05 14:29:32.450653] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.862 [2024-12-05 14:29:32.450684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.862 [2024-12-05 14:29:32.450695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.862 [2024-12-05 14:29:32.453603] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.862 [2024-12-05 14:29:32.453633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.862 [2024-12-05 14:29:32.453644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.862 [2024-12-05 14:29:32.457895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb0d10) 00:23:26.862 [2024-12-05 14:29:32.457936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.862 [2024-12-05 14:29:32.457947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.862 00:23:26.862 Latency(us) 00:23:26.862 [2024-12-05T14:29:32.510Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.862 [2024-12-05T14:29:32.510Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, 
depth: 16, IO size: 131072) 00:23:26.862 nvme0n1 : 2.00 8343.24 1042.91 0.00 0.00 1914.93 513.86 5332.25 00:23:26.862 [2024-12-05T14:29:32.510Z] =================================================================================================================== 00:23:26.862 [2024-12-05T14:29:32.510Z] Total : 8343.24 1042.91 0.00 0.00 1914.93 513.86 5332.25 00:23:26.862 0 00:23:26.862 14:29:32 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:26.862 14:29:32 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:26.862 14:29:32 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:26.862 | .driver_specific 00:23:26.862 | .nvme_error 00:23:26.862 | .status_code 00:23:26.862 | .command_transient_transport_error' 00:23:26.862 14:29:32 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:27.430 14:29:32 -- host/digest.sh@71 -- # (( 538 > 0 )) 00:23:27.430 14:29:32 -- host/digest.sh@73 -- # killprocess 97918 00:23:27.430 14:29:32 -- common/autotest_common.sh@936 -- # '[' -z 97918 ']' 00:23:27.430 14:29:32 -- common/autotest_common.sh@940 -- # kill -0 97918 00:23:27.430 14:29:32 -- common/autotest_common.sh@941 -- # uname 00:23:27.430 14:29:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:27.430 14:29:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97918 00:23:27.430 14:29:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:27.430 14:29:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:27.430 killing process with pid 97918 00:23:27.430 14:29:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97918' 00:23:27.430 Received shutdown signal, test time was about 2.000000 seconds 00:23:27.430 00:23:27.430 Latency(us) 00:23:27.430 [2024-12-05T14:29:33.078Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.430 [2024-12-05T14:29:33.078Z] =================================================================================================================== 00:23:27.430 [2024-12-05T14:29:33.078Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:27.430 14:29:32 -- common/autotest_common.sh@955 -- # kill 97918 00:23:27.430 14:29:32 -- common/autotest_common.sh@960 -- # wait 97918 00:23:27.430 14:29:33 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:23:27.430 14:29:33 -- host/digest.sh@54 -- # local rw bs qd 00:23:27.430 14:29:33 -- host/digest.sh@56 -- # rw=randwrite 00:23:27.430 14:29:33 -- host/digest.sh@56 -- # bs=4096 00:23:27.430 14:29:33 -- host/digest.sh@56 -- # qd=128 00:23:27.689 14:29:33 -- host/digest.sh@58 -- # bperfpid=98003 00:23:27.689 14:29:33 -- host/digest.sh@60 -- # waitforlisten 98003 /var/tmp/bperf.sock 00:23:27.689 14:29:33 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:23:27.689 14:29:33 -- common/autotest_common.sh@829 -- # '[' -z 98003 ']' 00:23:27.689 14:29:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:27.689 14:29:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:27.689 14:29:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:27.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:23:27.689 14:29:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:27.689 14:29:33 -- common/autotest_common.sh@10 -- # set +x 00:23:27.689 [2024-12-05 14:29:33.132612] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:27.689 [2024-12-05 14:29:33.132722] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98003 ] 00:23:27.689 [2024-12-05 14:29:33.272177] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.948 [2024-12-05 14:29:33.347522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:28.515 14:29:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:28.515 14:29:34 -- common/autotest_common.sh@862 -- # return 0 00:23:28.516 14:29:34 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:28.516 14:29:34 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:28.774 14:29:34 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:28.774 14:29:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.774 14:29:34 -- common/autotest_common.sh@10 -- # set +x 00:23:28.774 14:29:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.774 14:29:34 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:28.774 14:29:34 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:29.033 nvme0n1 00:23:29.033 14:29:34 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:23:29.033 14:29:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.033 14:29:34 -- common/autotest_common.sh@10 -- # set +x 00:23:29.033 14:29:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.033 14:29:34 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:29.033 14:29:34 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:29.292 Running I/O for 2 seconds... 
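(annotation) For readability, the randwrite digest-error pass traced above reduces to the command sequence below. Every binary path, RPC name, flag, and address is copied from the trace in this log; only the comments are added, and bperf_rpc / rpc_cmd are the harness wrappers exactly as they appear in the trace (bperf_rpc expands to rpc.py -s /var/tmp/bperf.sock, as shown at host/digest.sh@18).

  # start bdevperf in "wait for RPC" mode (-z); the harness backgrounds it and waits for
  # /var/tmp/bperf.sock to appear (waitforlisten 98003 above) before issuing RPCs
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z

  # count NVMe completions per status code and retry failed I/O at the bdev layer (-1: unlimited),
  # so the injected digest failures are recorded as transient transport errors instead of failing the job
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # start with crc32c error injection disabled, attach the TCP target with data digest enabled
  # (--ddgst), then arm crc32c corruption in the accel layer
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256

  # run the 2-second workload, then read back the transient-transport-error count
  # (the same host/digest.sh@71 check traced above for the randread pass requires it to be non-zero)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  bperf_rpc bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'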
00:23:29.292 [2024-12-05 14:29:34.780805] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190eea00 00:23:29.292 [2024-12-05 14:29:34.781714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.292 [2024-12-05 14:29:34.781752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:29.292 [2024-12-05 14:29:34.790632] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190ea680 00:23:29.292 [2024-12-05 14:29:34.791219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.292 [2024-12-05 14:29:34.791249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:29.292 [2024-12-05 14:29:34.800103] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f2948 00:23:29.292 [2024-12-05 14:29:34.800475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.292 [2024-12-05 14:29:34.800501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:29.292 [2024-12-05 14:29:34.811075] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e95a0 00:23:29.292 [2024-12-05 14:29:34.812025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.292 [2024-12-05 14:29:34.812054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:29.292 [2024-12-05 14:29:34.819708] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190de8a8 00:23:29.292 [2024-12-05 14:29:34.820278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.292 [2024-12-05 14:29:34.820307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:29.292 [2024-12-05 14:29:34.829683] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190ed920 00:23:29.292 [2024-12-05 14:29:34.830388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.292 [2024-12-05 14:29:34.830416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:29.292 [2024-12-05 14:29:34.838837] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190ef6a8 00:23:29.292 [2024-12-05 14:29:34.839719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.292 [2024-12-05 14:29:34.839747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0072 
p:0 m:0 dnr:0 00:23:29.292 [2024-12-05 14:29:34.849047] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190ef6a8 00:23:29.292 [2024-12-05 14:29:34.849732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.292 [2024-12-05 14:29:34.849759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:29.292 [2024-12-05 14:29:34.856646] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f57b0 00:23:29.292 [2024-12-05 14:29:34.857726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.292 [2024-12-05 14:29:34.857753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:29.292 [2024-12-05 14:29:34.867487] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e6b70 00:23:29.292 [2024-12-05 14:29:34.868483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.292 [2024-12-05 14:29:34.868511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.292 [2024-12-05 14:29:34.876237] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190edd58 00:23:29.292 [2024-12-05 14:29:34.877116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.292 [2024-12-05 14:29:34.877149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:29.292 [2024-12-05 14:29:34.886306] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190fe720 00:23:29.292 [2024-12-05 14:29:34.887236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.292 [2024-12-05 14:29:34.887274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:29.292 [2024-12-05 14:29:34.895772] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190eea00 00:23:29.292 [2024-12-05 14:29:34.896297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:12100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.292 [2024-12-05 14:29:34.896324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:29.293 [2024-12-05 14:29:34.905054] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e5658 00:23:29.293 [2024-12-05 14:29:34.905873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:3184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.293 [2024-12-05 14:29:34.905900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 
cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:29.293 [2024-12-05 14:29:34.914652] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e4578 00:23:29.293 [2024-12-05 14:29:34.915334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.293 [2024-12-05 14:29:34.915363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:29.293 [2024-12-05 14:29:34.923789] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190de470 00:23:29.293 [2024-12-05 14:29:34.924357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.293 [2024-12-05 14:29:34.924386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:29.293 [2024-12-05 14:29:34.932901] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190fc998 00:23:29.293 [2024-12-05 14:29:34.933416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.293 [2024-12-05 14:29:34.933445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:29.552 [2024-12-05 14:29:34.941995] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190ef270 00:23:29.552 [2024-12-05 14:29:34.942871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:11544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.552 [2024-12-05 14:29:34.942911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:29.552 [2024-12-05 14:29:34.951189] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f5be8 00:23:29.552 [2024-12-05 14:29:34.952155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.552 [2024-12-05 14:29:34.952194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:29.552 [2024-12-05 14:29:34.959638] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f6458 00:23:29.552 [2024-12-05 14:29:34.960005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:21010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.552 [2024-12-05 14:29:34.960030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:29.552 [2024-12-05 14:29:34.969702] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e8088 00:23:29.552 [2024-12-05 14:29:34.970202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.552 [2024-12-05 14:29:34.970236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:101 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:29.552 [2024-12-05 14:29:34.979058] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190eb328 00:23:29.552 [2024-12-05 14:29:34.979951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:18533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.552 [2024-12-05 14:29:34.980005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:29.552 [2024-12-05 14:29:34.988561] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f6020 00:23:29.552 [2024-12-05 14:29:34.989456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.552 [2024-12-05 14:29:34.989484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:29.552 [2024-12-05 14:29:34.997890] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190ee190 00:23:29.552 [2024-12-05 14:29:34.998441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.552 [2024-12-05 14:29:34.998467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:29.552 [2024-12-05 14:29:35.007062] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f1430 00:23:29.552 [2024-12-05 14:29:35.007623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:7838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.552 [2024-12-05 14:29:35.007651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:29.552 [2024-12-05 14:29:35.016206] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e84c0 00:23:29.552 [2024-12-05 14:29:35.016758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.552 [2024-12-05 14:29:35.016789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:29.552 [2024-12-05 14:29:35.025404] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f7538 00:23:29.552 [2024-12-05 14:29:35.025897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.553 [2024-12-05 14:29:35.025930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:29.553 [2024-12-05 14:29:35.034520] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f7970 00:23:29.553 [2024-12-05 14:29:35.035005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.553 [2024-12-05 14:29:35.035030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:29.553 [2024-12-05 14:29:35.043685] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190ee5c8 00:23:29.553 [2024-12-05 14:29:35.044165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:12015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.553 [2024-12-05 14:29:35.044203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:29.553 [2024-12-05 14:29:35.052766] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e1710 00:23:29.553 [2024-12-05 14:29:35.053313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:25372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.553 [2024-12-05 14:29:35.053341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:29.553 [2024-12-05 14:29:35.063148] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190eaab8 00:23:29.553 [2024-12-05 14:29:35.064282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.553 [2024-12-05 14:29:35.064310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:29.553 [2024-12-05 14:29:35.071673] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190ef270 00:23:29.553 [2024-12-05 14:29:35.072737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.553 [2024-12-05 14:29:35.072768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:29.553 [2024-12-05 14:29:35.079504] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190eaef0 00:23:29.553 [2024-12-05 14:29:35.080342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.553 [2024-12-05 14:29:35.080390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:29.553 [2024-12-05 14:29:35.088824] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190ddc00 00:23:29.553 [2024-12-05 14:29:35.088961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.553 [2024-12-05 14:29:35.088980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:29.553 [2024-12-05 14:29:35.100497] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f2948 00:23:29.553 [2024-12-05 14:29:35.101517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:10453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.553 [2024-12-05 14:29:35.101543] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:29.553 [2024-12-05 14:29:35.109903] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e6fa8 00:23:29.553 [2024-12-05 14:29:35.111303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:11415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.553 [2024-12-05 14:29:35.111331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:29.553 [2024-12-05 14:29:35.117543] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f2948 00:23:29.553 [2024-12-05 14:29:35.118359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.553 [2024-12-05 14:29:35.118394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:29.553 [2024-12-05 14:29:35.127254] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190fac10 00:23:29.553 [2024-12-05 14:29:35.128515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.553 [2024-12-05 14:29:35.128542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:29.553 [2024-12-05 14:29:35.136616] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f3a28 00:23:29.553 [2024-12-05 14:29:35.137749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.553 [2024-12-05 14:29:35.137776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:29.553 [2024-12-05 14:29:35.145740] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e6b70 00:23:29.553 [2024-12-05 14:29:35.146690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.553 [2024-12-05 14:29:35.146717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:29.553 [2024-12-05 14:29:35.155581] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e0a68 00:23:29.553 [2024-12-05 14:29:35.156210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.553 [2024-12-05 14:29:35.156238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:29.553 [2024-12-05 14:29:35.164827] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e1f80 00:23:29.553 [2024-12-05 14:29:35.165533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.553 [2024-12-05 14:29:35.165560] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:29.553 [2024-12-05 14:29:35.173872] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e7c50 00:23:29.553 [2024-12-05 14:29:35.174579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.553 [2024-12-05 14:29:35.174605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:29.553 [2024-12-05 14:29:35.183039] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f2510 00:23:29.553 [2024-12-05 14:29:35.183735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:15825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.553 [2024-12-05 14:29:35.183762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:29.553 [2024-12-05 14:29:35.192285] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f5be8 00:23:29.553 [2024-12-05 14:29:35.192929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.553 [2024-12-05 14:29:35.192983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:29.811 [2024-12-05 14:29:35.201396] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190df118 00:23:29.811 [2024-12-05 14:29:35.202213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.811 [2024-12-05 14:29:35.202266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:29.811 [2024-12-05 14:29:35.209496] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190df550 00:23:29.811 [2024-12-05 14:29:35.209720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.811 [2024-12-05 14:29:35.209739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:29.811 [2024-12-05 14:29:35.220312] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e88f8 00:23:29.811 [2024-12-05 14:29:35.221145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:13748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.811 [2024-12-05 14:29:35.221183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:29.811 [2024-12-05 14:29:35.228611] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f96f8 00:23:29.811 [2024-12-05 14:29:35.229504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.811 [2024-12-05 14:29:35.229532] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:29.811 [2024-12-05 14:29:35.237776] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190efae0 00:23:29.811 [2024-12-05 14:29:35.238489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.811 [2024-12-05 14:29:35.238516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:29.811 [2024-12-05 14:29:35.246917] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190ed920 00:23:29.811 [2024-12-05 14:29:35.247690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.811 [2024-12-05 14:29:35.247717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:29.811 [2024-12-05 14:29:35.256224] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f7538 00:23:29.811 [2024-12-05 14:29:35.257196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.811 [2024-12-05 14:29:35.257242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:29.811 [2024-12-05 14:29:35.265454] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e7c50 00:23:29.811 [2024-12-05 14:29:35.265769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.811 [2024-12-05 14:29:35.265792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:29.811 [2024-12-05 14:29:35.274755] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190fbcf0 00:23:29.811 [2024-12-05 14:29:35.275521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.811 [2024-12-05 14:29:35.275549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:29.811 [2024-12-05 14:29:35.285653] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190fac10 00:23:29.811 [2024-12-05 14:29:35.287098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.811 [2024-12-05 14:29:35.287135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:29.811 [2024-12-05 14:29:35.295672] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e4de8 00:23:29.811 [2024-12-05 14:29:35.296701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.811 [2024-12-05 
14:29:35.296739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:29.811 [2024-12-05 14:29:35.304395] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190fc998 00:23:29.811 [2024-12-05 14:29:35.304897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.811 [2024-12-05 14:29:35.304926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:29.811 [2024-12-05 14:29:35.313551] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f7100 00:23:29.811 [2024-12-05 14:29:35.314463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.811 [2024-12-05 14:29:35.314490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:29.811 [2024-12-05 14:29:35.322717] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190ed4e8 00:23:29.811 [2024-12-05 14:29:35.324023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.811 [2024-12-05 14:29:35.324061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:29.811 [2024-12-05 14:29:35.332215] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e5a90 00:23:29.811 [2024-12-05 14:29:35.332941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:24221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.812 [2024-12-05 14:29:35.332971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:29.812 [2024-12-05 14:29:35.341506] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190feb58 00:23:29.812 [2024-12-05 14:29:35.342212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:19132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.812 [2024-12-05 14:29:35.342239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:29.812 [2024-12-05 14:29:35.350980] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190ed0b0 00:23:29.812 [2024-12-05 14:29:35.351829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.812 [2024-12-05 14:29:35.351861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:29.812 [2024-12-05 14:29:35.361738] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e6738 00:23:29.812 [2024-12-05 14:29:35.362387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.812 
[2024-12-05 14:29:35.362414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:29.812 [2024-12-05 14:29:35.371782] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e5658 00:23:29.812 [2024-12-05 14:29:35.372319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.812 [2024-12-05 14:29:35.372347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:29.812 [2024-12-05 14:29:35.381375] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e5a90 00:23:29.812 [2024-12-05 14:29:35.382486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:6141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.812 [2024-12-05 14:29:35.382514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:29.812 [2024-12-05 14:29:35.391074] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190eaef0 00:23:29.812 [2024-12-05 14:29:35.391623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:11138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.812 [2024-12-05 14:29:35.391648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:29.812 [2024-12-05 14:29:35.400360] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f7100 00:23:29.812 [2024-12-05 14:29:35.401471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.812 [2024-12-05 14:29:35.401499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:29.812 [2024-12-05 14:29:35.408924] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f7100 00:23:29.812 [2024-12-05 14:29:35.409723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:15400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.812 [2024-12-05 14:29:35.409750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:29.812 [2024-12-05 14:29:35.418625] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190eaab8 00:23:29.812 [2024-12-05 14:29:35.419746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.812 [2024-12-05 14:29:35.419773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:29.812 [2024-12-05 14:29:35.428077] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190fbcf0 00:23:29.812 [2024-12-05 14:29:35.428908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16453 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:29.812 [2024-12-05 14:29:35.428945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:29.812 [2024-12-05 14:29:35.437365] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f96f8 00:23:29.812 [2024-12-05 14:29:35.438108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.812 [2024-12-05 14:29:35.438134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:29.812 [2024-12-05 14:29:35.446447] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190fa3a0 00:23:29.812 [2024-12-05 14:29:35.446941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.812 [2024-12-05 14:29:35.446966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:29.812 [2024-12-05 14:29:35.455556] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190fef90 00:23:29.812 [2024-12-05 14:29:35.456218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.812 [2024-12-05 14:29:35.456248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:30.070 [2024-12-05 14:29:35.464518] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190ed0b0 00:23:30.070 [2024-12-05 14:29:35.464827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.070 [2024-12-05 14:29:35.464857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:30.070 [2024-12-05 14:29:35.473621] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f7970 00:23:30.070 [2024-12-05 14:29:35.474137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:20094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.070 [2024-12-05 14:29:35.474165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:30.070 [2024-12-05 14:29:35.483182] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190ee190 00:23:30.070 [2024-12-05 14:29:35.484477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.070 [2024-12-05 14:29:35.484504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:30.070 [2024-12-05 14:29:35.493170] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f1ca0 00:23:30.070 [2024-12-05 14:29:35.494511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22688 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.070 [2024-12-05 14:29:35.494538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:30.070 [2024-12-05 14:29:35.502552] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190ed920 00:23:30.070 [2024-12-05 14:29:35.503139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.070 [2024-12-05 14:29:35.503164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:30.070 [2024-12-05 14:29:35.510637] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f4298 00:23:30.070 [2024-12-05 14:29:35.511490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.070 [2024-12-05 14:29:35.511517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:30.070 [2024-12-05 14:29:35.520025] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190ed920 00:23:30.070 [2024-12-05 14:29:35.520239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.071 [2024-12-05 14:29:35.520259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:30.071 [2024-12-05 14:29:35.529118] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f0bc0 00:23:30.071 [2024-12-05 14:29:35.529288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.071 [2024-12-05 14:29:35.529307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:30.071 [2024-12-05 14:29:35.538383] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190fa7d8 00:23:30.071 [2024-12-05 14:29:35.538715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.071 [2024-12-05 14:29:35.538741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:30.071 [2024-12-05 14:29:35.547494] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e27f0 00:23:30.071 [2024-12-05 14:29:35.547642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:21361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.071 [2024-12-05 14:29:35.547661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:30.071 [2024-12-05 14:29:35.556769] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190fd208 00:23:30.071 [2024-12-05 14:29:35.557480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 
lba:13556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.071 [2024-12-05 14:29:35.557508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:30.071 [2024-12-05 14:29:35.565971] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190ea680 00:23:30.071 [2024-12-05 14:29:35.566918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.071 [2024-12-05 14:29:35.566945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:30.071 [2024-12-05 14:29:35.574961] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f0788 00:23:30.071 [2024-12-05 14:29:35.575084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.071 [2024-12-05 14:29:35.575103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:30.071 [2024-12-05 14:29:35.584043] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190ecc78 00:23:30.071 [2024-12-05 14:29:35.584172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:25494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.071 [2024-12-05 14:29:35.584191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:30.071 [2024-12-05 14:29:35.593543] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190ef6a8 00:23:30.071 [2024-12-05 14:29:35.594237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.071 [2024-12-05 14:29:35.594274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:30.071 [2024-12-05 14:29:35.602589] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f57b0 00:23:30.071 [2024-12-05 14:29:35.603673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:9071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.071 [2024-12-05 14:29:35.603700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:30.071 [2024-12-05 14:29:35.611985] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190fac10 00:23:30.071 [2024-12-05 14:29:35.612313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:98 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.071 [2024-12-05 14:29:35.612337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:30.071 [2024-12-05 14:29:35.622961] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190eb760 00:23:30.071 [2024-12-05 14:29:35.624360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:66 nsid:1 lba:8982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.071 [2024-12-05 14:29:35.624387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:30.071 [2024-12-05 14:29:35.632073] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190eff18 00:23:30.071 [2024-12-05 14:29:35.633512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:17452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.071 [2024-12-05 14:29:35.633538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:30.071 [2024-12-05 14:29:35.640488] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190fe720 00:23:30.071 [2024-12-05 14:29:35.641434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.071 [2024-12-05 14:29:35.641461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:30.071 [2024-12-05 14:29:35.649701] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f92c0 00:23:30.071 [2024-12-05 14:29:35.650394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.071 [2024-12-05 14:29:35.650421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:30.071 [2024-12-05 14:29:35.658839] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190feb58 00:23:30.071 [2024-12-05 14:29:35.659507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.071 [2024-12-05 14:29:35.659534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:30.071 [2024-12-05 14:29:35.668205] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e4140 00:23:30.071 [2024-12-05 14:29:35.668756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.071 [2024-12-05 14:29:35.668783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:30.071 [2024-12-05 14:29:35.677886] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e73e0 00:23:30.071 [2024-12-05 14:29:35.678544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.071 [2024-12-05 14:29:35.678570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:30.071 [2024-12-05 14:29:35.686998] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f2d80 00:23:30.071 [2024-12-05 14:29:35.687608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:68 nsid:1 lba:15986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.071 [2024-12-05 14:29:35.687635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:30.071 [2024-12-05 14:29:35.695969] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190eea00 00:23:30.071 [2024-12-05 14:29:35.696663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.071 [2024-12-05 14:29:35.696690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:30.071 [2024-12-05 14:29:35.705844] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e8088 00:23:30.071 [2024-12-05 14:29:35.706356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:3308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.071 [2024-12-05 14:29:35.706382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:30.071 [2024-12-05 14:29:35.714017] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f4f40 00:23:30.071 [2024-12-05 14:29:35.715530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.071 [2024-12-05 14:29:35.715558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:30.329 [2024-12-05 14:29:35.723297] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f92c0 00:23:30.330 [2024-12-05 14:29:35.724717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:10075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.330 [2024-12-05 14:29:35.724744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:30.330 [2024-12-05 14:29:35.733379] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f57b0 00:23:30.330 [2024-12-05 14:29:35.733948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.330 [2024-12-05 14:29:35.733976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:30.330 [2024-12-05 14:29:35.742119] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190de8a8 00:23:30.330 [2024-12-05 14:29:35.743428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:17159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.330 [2024-12-05 14:29:35.743455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:30.330 [2024-12-05 14:29:35.750746] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190feb58 00:23:30.330 [2024-12-05 14:29:35.751495] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.330 [2024-12-05 14:29:35.751523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:30.330 [2024-12-05 14:29:35.759905] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190ed920 00:23:30.330 [2024-12-05 14:29:35.760684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.330 [2024-12-05 14:29:35.760711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:30.330 [2024-12-05 14:29:35.770028] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e7c50 00:23:30.330 [2024-12-05 14:29:35.770518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.330 [2024-12-05 14:29:35.770544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:30.330 [2024-12-05 14:29:35.780122] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190ee5c8 00:23:30.330 [2024-12-05 14:29:35.780779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:21679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.330 [2024-12-05 14:29:35.780815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:30.330 [2024-12-05 14:29:35.788508] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f57b0 00:23:30.330 [2024-12-05 14:29:35.789369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.330 [2024-12-05 14:29:35.789402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.330 [2024-12-05 14:29:35.797260] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e12d8 00:23:30.330 [2024-12-05 14:29:35.797436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:3725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.330 [2024-12-05 14:29:35.797455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.330 [2024-12-05 14:29:35.806741] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e1b48 00:23:30.330 [2024-12-05 14:29:35.807361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.330 [2024-12-05 14:29:35.807388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:30.330 [2024-12-05 14:29:35.815784] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e4140 00:23:30.330 [2024-12-05 14:29:35.816927] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.330 [2024-12-05 14:29:35.816954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:30.330 [2024-12-05 14:29:35.825709] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190fcdd0 00:23:30.330 [2024-12-05 14:29:35.826958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.330 [2024-12-05 14:29:35.826996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:30.330 [2024-12-05 14:29:35.835169] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e84c0 00:23:30.330 [2024-12-05 14:29:35.836046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.330 [2024-12-05 14:29:35.836073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:30.330 [2024-12-05 14:29:35.843596] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190fd640 00:23:30.330 [2024-12-05 14:29:35.844511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:2886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.330 [2024-12-05 14:29:35.844538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.330 [2024-12-05 14:29:35.852940] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190fbcf0 00:23:30.330 [2024-12-05 14:29:35.853406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.330 [2024-12-05 14:29:35.853434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:30.330 [2024-12-05 14:29:35.863858] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e7c50 00:23:30.330 [2024-12-05 14:29:35.865346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.330 [2024-12-05 14:29:35.865374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.330 [2024-12-05 14:29:35.872978] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f3e60 00:23:30.330 [2024-12-05 14:29:35.874428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.330 [2024-12-05 14:29:35.874454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:30.330 [2024-12-05 14:29:35.880909] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190ed920 00:23:30.330 [2024-12-05 
14:29:35.881915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.330 [2024-12-05 14:29:35.881954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:30.330 [2024-12-05 14:29:35.890236] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190fdeb0 00:23:30.330 [2024-12-05 14:29:35.891332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.330 [2024-12-05 14:29:35.891372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:30.330 [2024-12-05 14:29:35.900342] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e27f0 00:23:30.330 [2024-12-05 14:29:35.901439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.330 [2024-12-05 14:29:35.901470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:30.330 [2024-12-05 14:29:35.910442] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190ed920 00:23:30.330 [2024-12-05 14:29:35.911359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.330 [2024-12-05 14:29:35.911396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:30.330 [2024-12-05 14:29:35.921234] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190eb328 00:23:30.330 [2024-12-05 14:29:35.922012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.330 [2024-12-05 14:29:35.922045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:30.330 [2024-12-05 14:29:35.932376] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e7818 00:23:30.330 [2024-12-05 14:29:35.933605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:21038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.330 [2024-12-05 14:29:35.933643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:30.330 [2024-12-05 14:29:35.939043] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e38d0 00:23:30.330 [2024-12-05 14:29:35.940087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.330 [2024-12-05 14:29:35.940114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:30.330 [2024-12-05 14:29:35.948411] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190eb760 00:23:30.330 
[2024-12-05 14:29:35.949159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.330 [2024-12-05 14:29:35.949186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:30.330 [2024-12-05 14:29:35.958375] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f31b8 00:23:30.330 [2024-12-05 14:29:35.958675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.330 [2024-12-05 14:29:35.958698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:30.331 [2024-12-05 14:29:35.967710] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e6b70 00:23:30.331 [2024-12-05 14:29:35.968932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.331 [2024-12-05 14:29:35.968970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:30.589 [2024-12-05 14:29:35.977969] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f20d8 00:23:30.589 [2024-12-05 14:29:35.978580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.589 [2024-12-05 14:29:35.978607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:30.589 [2024-12-05 14:29:35.988540] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190fe720 00:23:30.589 [2024-12-05 14:29:35.989311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.589 [2024-12-05 14:29:35.989337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:30.589 [2024-12-05 14:29:35.997058] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f8618 00:23:30.589 [2024-12-05 14:29:35.998015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.589 [2024-12-05 14:29:35.998050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:30.589 [2024-12-05 14:29:36.006278] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f7538 00:23:30.589 [2024-12-05 14:29:36.006570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:15842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.589 [2024-12-05 14:29:36.006593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:30.589 [2024-12-05 14:29:36.015452] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with 
pdu=0x2000190f3a28 00:23:30.589 [2024-12-05 14:29:36.015885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.589 [2024-12-05 14:29:36.015909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:30.589 [2024-12-05 14:29:36.025855] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190ef6a8 00:23:30.589 [2024-12-05 14:29:36.027138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.589 [2024-12-05 14:29:36.027165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:30.589 [2024-12-05 14:29:36.035736] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190fd640 00:23:30.589 [2024-12-05 14:29:36.036865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.589 [2024-12-05 14:29:36.036891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:30.589 [2024-12-05 14:29:36.045666] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e5220 00:23:30.589 [2024-12-05 14:29:36.047103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.589 [2024-12-05 14:29:36.047141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:30.589 [2024-12-05 14:29:36.054280] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e1b48 00:23:30.589 [2024-12-05 14:29:36.055093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.589 [2024-12-05 14:29:36.055130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:30.589 [2024-12-05 14:29:36.062248] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e7818 00:23:30.589 [2024-12-05 14:29:36.063282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.589 [2024-12-05 14:29:36.063309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:30.589 [2024-12-05 14:29:36.071342] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e99d8 00:23:30.590 [2024-12-05 14:29:36.072324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.590 [2024-12-05 14:29:36.072352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:30.590 [2024-12-05 14:29:36.081339] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xf490e0) with pdu=0x2000190eee38 00:23:30.590 [2024-12-05 14:29:36.081597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.590 [2024-12-05 14:29:36.081616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:30.590 [2024-12-05 14:29:36.090674] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190eea00 00:23:30.590 [2024-12-05 14:29:36.090952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.590 [2024-12-05 14:29:36.090971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:30.590 [2024-12-05 14:29:36.099772] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e1f80 00:23:30.590 [2024-12-05 14:29:36.100032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.590 [2024-12-05 14:29:36.100051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:30.590 [2024-12-05 14:29:36.108974] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f6cc8 00:23:30.590 [2024-12-05 14:29:36.109316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:3910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.590 [2024-12-05 14:29:36.109340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:30.590 [2024-12-05 14:29:36.118131] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190ec408 00:23:30.590 [2024-12-05 14:29:36.118448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.590 [2024-12-05 14:29:36.118472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:30.590 [2024-12-05 14:29:36.127248] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f4298 00:23:30.590 [2024-12-05 14:29:36.127543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.590 [2024-12-05 14:29:36.127567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:30.590 [2024-12-05 14:29:36.136874] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190fa3a0 00:23:30.590 [2024-12-05 14:29:36.137164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:25176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.590 [2024-12-05 14:29:36.137188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:30.590 [2024-12-05 14:29:36.146202] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xf490e0) with pdu=0x2000190fe2e8 00:23:30.590 [2024-12-05 14:29:36.146457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.590 [2024-12-05 14:29:36.146485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:30.590 [2024-12-05 14:29:36.155227] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190ed920 00:23:30.590 [2024-12-05 14:29:36.155446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.590 [2024-12-05 14:29:36.155471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:30.590 [2024-12-05 14:29:36.164405] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e5658 00:23:30.590 [2024-12-05 14:29:36.164609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.590 [2024-12-05 14:29:36.164628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:30.590 [2024-12-05 14:29:36.173388] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e2c28 00:23:30.590 [2024-12-05 14:29:36.173970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.590 [2024-12-05 14:29:36.173996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:30.590 [2024-12-05 14:29:36.182913] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e1710 00:23:30.590 [2024-12-05 14:29:36.184070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.590 [2024-12-05 14:29:36.184097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:30.590 [2024-12-05 14:29:36.192059] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190fac10 00:23:30.590 [2024-12-05 14:29:36.192917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:11071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.590 [2024-12-05 14:29:36.192943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:30.590 [2024-12-05 14:29:36.201037] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e27f0 00:23:30.590 [2024-12-05 14:29:36.201504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:25580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.590 [2024-12-05 14:29:36.201532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:30.590 [2024-12-05 14:29:36.210386] tcp.c:2036:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f3a28 00:23:30.590 [2024-12-05 14:29:36.211116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:7449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.590 [2024-12-05 14:29:36.211146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:30.590 [2024-12-05 14:29:36.220502] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190eb760 00:23:30.590 [2024-12-05 14:29:36.220929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.590 [2024-12-05 14:29:36.220965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:30.590 [2024-12-05 14:29:36.230532] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190ec408 00:23:30.590 [2024-12-05 14:29:36.231377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.590 [2024-12-05 14:29:36.231406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.849 [2024-12-05 14:29:36.238910] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e3d08 00:23:30.849 [2024-12-05 14:29:36.239220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.849 [2024-12-05 14:29:36.239256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:30.849 [2024-12-05 14:29:36.248823] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190eff18 00:23:30.849 [2024-12-05 14:29:36.249298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.849 [2024-12-05 14:29:36.249321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:30.849 [2024-12-05 14:29:36.257251] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f92c0 00:23:30.849 [2024-12-05 14:29:36.257994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.849 [2024-12-05 14:29:36.258020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:30.849 [2024-12-05 14:29:36.266647] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190ec408 00:23:30.849 [2024-12-05 14:29:36.267000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.849 [2024-12-05 14:29:36.267024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:30.849 [2024-12-05 14:29:36.276591] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190fd640 00:23:30.849 [2024-12-05 14:29:36.277091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:10896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.849 [2024-12-05 14:29:36.277124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:30.849 [2024-12-05 14:29:36.285771] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190fd208 00:23:30.849 [2024-12-05 14:29:36.286677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.849 [2024-12-05 14:29:36.286704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:30.849 [2024-12-05 14:29:36.294999] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190fd640 00:23:30.849 [2024-12-05 14:29:36.295562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.849 [2024-12-05 14:29:36.295589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:30.849 [2024-12-05 14:29:36.304026] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190fe2e8 00:23:30.849 [2024-12-05 14:29:36.304653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:12812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.849 [2024-12-05 14:29:36.304679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:30.849 [2024-12-05 14:29:36.313385] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e95a0 00:23:30.849 [2024-12-05 14:29:36.313915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.849 [2024-12-05 14:29:36.313944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:30.849 [2024-12-05 14:29:36.322364] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e99d8 00:23:30.849 [2024-12-05 14:29:36.322870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.849 [2024-12-05 14:29:36.322897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:30.849 [2024-12-05 14:29:36.331400] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f3a28 00:23:30.849 [2024-12-05 14:29:36.331886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.849 [2024-12-05 14:29:36.331911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:30.850 [2024-12-05 
14:29:36.340487] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f4b08 00:23:30.850 [2024-12-05 14:29:36.340943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.850 [2024-12-05 14:29:36.340980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:30.850 [2024-12-05 14:29:36.349515] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190ec408 00:23:30.850 [2024-12-05 14:29:36.349981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.850 [2024-12-05 14:29:36.350002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:30.850 [2024-12-05 14:29:36.358311] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190ebb98 00:23:30.850 [2024-12-05 14:29:36.359319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.850 [2024-12-05 14:29:36.359345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:30.850 [2024-12-05 14:29:36.367519] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f6890 00:23:30.850 [2024-12-05 14:29:36.367792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.850 [2024-12-05 14:29:36.367823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:30.850 [2024-12-05 14:29:36.377436] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190eaab8 00:23:30.850 [2024-12-05 14:29:36.378030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:25433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.850 [2024-12-05 14:29:36.378057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:30.850 [2024-12-05 14:29:36.387619] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190fd208 00:23:30.850 [2024-12-05 14:29:36.388904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.850 [2024-12-05 14:29:36.388941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:30.850 [2024-12-05 14:29:36.397373] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190eea00 00:23:30.850 [2024-12-05 14:29:36.397880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.850 [2024-12-05 14:29:36.397902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005b p:0 m:0 dnr:0 
00:23:30.850 [2024-12-05 14:29:36.406567] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e7c50 00:23:30.850 [2024-12-05 14:29:36.407275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.850 [2024-12-05 14:29:36.407303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:30.850 [2024-12-05 14:29:36.415655] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e4140 00:23:30.850 [2024-12-05 14:29:36.416705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.850 [2024-12-05 14:29:36.416732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:30.850 [2024-12-05 14:29:36.424967] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190efae0 00:23:30.850 [2024-12-05 14:29:36.425744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.850 [2024-12-05 14:29:36.425770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:30.850 [2024-12-05 14:29:36.434630] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190eaab8 00:23:30.850 [2024-12-05 14:29:36.435039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:13263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.850 [2024-12-05 14:29:36.435063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:30.850 [2024-12-05 14:29:36.445819] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190df988 00:23:30.850 [2024-12-05 14:29:36.446840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.850 [2024-12-05 14:29:36.446865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:30.850 [2024-12-05 14:29:36.453914] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f5be8 00:23:30.850 [2024-12-05 14:29:36.455022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:12292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.850 [2024-12-05 14:29:36.455049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:30.850 [2024-12-05 14:29:36.463437] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f1ca0 00:23:30.850 [2024-12-05 14:29:36.464737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.850 [2024-12-05 14:29:36.464764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007f 
p:0 m:0 dnr:0 00:23:30.850 [2024-12-05 14:29:36.472012] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190eee38 00:23:30.850 [2024-12-05 14:29:36.473434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:14220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.850 [2024-12-05 14:29:36.473463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:30.850 [2024-12-05 14:29:36.481917] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190eea00 00:23:30.850 [2024-12-05 14:29:36.482549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:11368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.850 [2024-12-05 14:29:36.482577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:30.850 [2024-12-05 14:29:36.491048] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e01f8 00:23:30.850 [2024-12-05 14:29:36.491910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.850 [2024-12-05 14:29:36.491935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:31.109 [2024-12-05 14:29:36.499872] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f7da8 00:23:31.109 [2024-12-05 14:29:36.501001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.109 [2024-12-05 14:29:36.501028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:31.109 [2024-12-05 14:29:36.509532] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f8618 00:23:31.109 [2024-12-05 14:29:36.509917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.109 [2024-12-05 14:29:36.509940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:31.109 [2024-12-05 14:29:36.518817] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190eb760 00:23:31.109 [2024-12-05 14:29:36.519320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.109 [2024-12-05 14:29:36.519347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:31.109 [2024-12-05 14:29:36.527913] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190de470 00:23:31.109 [2024-12-05 14:29:36.528461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.109 [2024-12-05 14:29:36.528489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 
cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:31.109 [2024-12-05 14:29:36.536962] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e99d8 00:23:31.109 [2024-12-05 14:29:36.537457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.110 [2024-12-05 14:29:36.537484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:31.110 [2024-12-05 14:29:36.545941] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190fc560 00:23:31.110 [2024-12-05 14:29:36.546374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:25256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.110 [2024-12-05 14:29:36.546397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:31.110 [2024-12-05 14:29:36.555013] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f1ca0 00:23:31.110 [2024-12-05 14:29:36.555418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.110 [2024-12-05 14:29:36.555450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:31.110 [2024-12-05 14:29:36.564090] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e88f8 00:23:31.110 [2024-12-05 14:29:36.564512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.110 [2024-12-05 14:29:36.564547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:31.110 [2024-12-05 14:29:36.573209] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f2d80 00:23:31.110 [2024-12-05 14:29:36.573630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.110 [2024-12-05 14:29:36.573665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:31.110 [2024-12-05 14:29:36.582541] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f7970 00:23:31.110 [2024-12-05 14:29:36.583303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.110 [2024-12-05 14:29:36.583332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:31.110 [2024-12-05 14:29:36.591626] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e01f8 00:23:31.110 [2024-12-05 14:29:36.592741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.110 [2024-12-05 14:29:36.592769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:31.110 [2024-12-05 14:29:36.600856] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e4de8 00:23:31.110 [2024-12-05 14:29:36.601895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:3177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.110 [2024-12-05 14:29:36.601920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:31.110 [2024-12-05 14:29:36.609714] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e27f0 00:23:31.110 [2024-12-05 14:29:36.610492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.110 [2024-12-05 14:29:36.610519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:31.110 [2024-12-05 14:29:36.619604] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190fe2e8 00:23:31.110 [2024-12-05 14:29:36.619988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.110 [2024-12-05 14:29:36.620012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:31.110 [2024-12-05 14:29:36.628817] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e3498 00:23:31.110 [2024-12-05 14:29:36.629322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:14092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.110 [2024-12-05 14:29:36.629348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:31.110 [2024-12-05 14:29:36.637828] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e73e0 00:23:31.110 [2024-12-05 14:29:36.638275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.110 [2024-12-05 14:29:36.638318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:31.110 [2024-12-05 14:29:36.649174] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190fa3a0 00:23:31.110 [2024-12-05 14:29:36.649594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.110 [2024-12-05 14:29:36.649614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:31.110 [2024-12-05 14:29:36.663451] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190ebfd0 00:23:31.110 [2024-12-05 14:29:36.664002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.110 [2024-12-05 14:29:36.664021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:31.110 [2024-12-05 14:29:36.674771] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190dfdc0 00:23:31.110 [2024-12-05 14:29:36.675962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:10807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.110 [2024-12-05 14:29:36.676010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:31.110 [2024-12-05 14:29:36.684453] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f2948 00:23:31.110 [2024-12-05 14:29:36.685363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:14361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.110 [2024-12-05 14:29:36.685390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:31.110 [2024-12-05 14:29:36.693547] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190ddc00 00:23:31.110 [2024-12-05 14:29:36.694075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:24285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.110 [2024-12-05 14:29:36.694099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:31.110 [2024-12-05 14:29:36.703122] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e6fa8 00:23:31.110 [2024-12-05 14:29:36.704090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.110 [2024-12-05 14:29:36.704126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.110 [2024-12-05 14:29:36.711350] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190fe720 00:23:31.110 [2024-12-05 14:29:36.711653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.110 [2024-12-05 14:29:36.711677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:31.110 [2024-12-05 14:29:36.721021] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190dfdc0 00:23:31.110 [2024-12-05 14:29:36.722266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.110 [2024-12-05 14:29:36.722293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:31.110 [2024-12-05 14:29:36.729857] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190ef6a8 00:23:31.110 [2024-12-05 14:29:36.730738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.110 [2024-12-05 14:29:36.730766] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:31.110 [2024-12-05 14:29:36.738913] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190f5378 00:23:31.110 [2024-12-05 14:29:36.739794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.110 [2024-12-05 14:29:36.739830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:31.110 [2024-12-05 14:29:36.749186] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e8088 00:23:31.110 [2024-12-05 14:29:36.749836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.110 [2024-12-05 14:29:36.749861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:31.369 [2024-12-05 14:29:36.759394] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190e6738 00:23:31.369 [2024-12-05 14:29:36.760934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.369 [2024-12-05 14:29:36.760959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:31.369 [2024-12-05 14:29:36.769107] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf490e0) with pdu=0x2000190fb048 00:23:31.369 [2024-12-05 14:29:36.769914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.369 [2024-12-05 14:29:36.769950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:31.369 00:23:31.369 Latency(us) 00:23:31.369 [2024-12-05T14:29:37.017Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.369 [2024-12-05T14:29:37.017Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:31.369 nvme0n1 : 2.01 27118.88 105.93 0.00 0.00 4714.07 1876.71 13345.51 00:23:31.369 [2024-12-05T14:29:37.017Z] =================================================================================================================== 00:23:31.369 [2024-12-05T14:29:37.017Z] Total : 27118.88 105.93 0.00 0.00 4714.07 1876.71 13345.51 00:23:31.369 0 00:23:31.369 14:29:36 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:31.369 14:29:36 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:31.369 14:29:36 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:31.369 | .driver_specific 00:23:31.369 | .nvme_error 00:23:31.369 | .status_code 00:23:31.369 | .command_transient_transport_error' 00:23:31.369 14:29:36 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:31.628 14:29:37 -- host/digest.sh@71 -- # (( 213 > 0 )) 00:23:31.628 14:29:37 -- host/digest.sh@73 -- # killprocess 98003 00:23:31.628 14:29:37 -- common/autotest_common.sh@936 -- # '[' -z 98003 ']' 00:23:31.628 14:29:37 -- common/autotest_common.sh@940 -- # kill -0 98003 00:23:31.628 
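[editor's note] At this point host/digest.sh reads the per-bdev NVMe error counters back over the bperf RPC socket and asserts that the injected data-digest failures surfaced as transient transport errors (213 in this run, hence the (( 213 > 0 )) check). A minimal stand-alone sketch of that check, using only the rpc.py invocation and jq filter visible in the trace above; the variable name "count" is illustrative:

  # read per-bdev NVMe error statistics from the bdevperf instance on /var/tmp/bperf.sock
  count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
          | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # the test only requires that at least one digest error was counted
  (( count > 0 ))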
14:29:37 -- common/autotest_common.sh@941 -- # uname 00:23:31.628 14:29:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:31.628 14:29:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98003 00:23:31.628 14:29:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:31.628 14:29:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:31.628 killing process with pid 98003 00:23:31.628 14:29:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98003' 00:23:31.628 Received shutdown signal, test time was about 2.000000 seconds 00:23:31.628 00:23:31.628 Latency(us) 00:23:31.628 [2024-12-05T14:29:37.276Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.628 [2024-12-05T14:29:37.276Z] =================================================================================================================== 00:23:31.628 [2024-12-05T14:29:37.276Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:31.628 14:29:37 -- common/autotest_common.sh@955 -- # kill 98003 00:23:31.628 14:29:37 -- common/autotest_common.sh@960 -- # wait 98003 00:23:31.887 14:29:37 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:23:31.887 14:29:37 -- host/digest.sh@54 -- # local rw bs qd 00:23:31.887 14:29:37 -- host/digest.sh@56 -- # rw=randwrite 00:23:31.887 14:29:37 -- host/digest.sh@56 -- # bs=131072 00:23:31.887 14:29:37 -- host/digest.sh@56 -- # qd=16 00:23:31.887 14:29:37 -- host/digest.sh@58 -- # bperfpid=98093 00:23:31.887 14:29:37 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:23:31.887 14:29:37 -- host/digest.sh@60 -- # waitforlisten 98093 /var/tmp/bperf.sock 00:23:31.887 14:29:37 -- common/autotest_common.sh@829 -- # '[' -z 98093 ']' 00:23:31.887 14:29:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:31.887 14:29:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:31.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:31.887 14:29:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:31.887 14:29:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:31.887 14:29:37 -- common/autotest_common.sh@10 -- # set +x 00:23:31.887 [2024-12-05 14:29:37.414078] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:31.887 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:31.887 Zero copy mechanism will not be used. 
00:23:31.887 [2024-12-05 14:29:37.414180] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98093 ] 00:23:32.146 [2024-12-05 14:29:37.552625] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.146 [2024-12-05 14:29:37.622650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.083 14:29:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:33.083 14:29:38 -- common/autotest_common.sh@862 -- # return 0 00:23:33.083 14:29:38 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:33.083 14:29:38 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:33.083 14:29:38 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:33.083 14:29:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.083 14:29:38 -- common/autotest_common.sh@10 -- # set +x 00:23:33.083 14:29:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.083 14:29:38 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:33.083 14:29:38 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:33.342 nvme0n1 00:23:33.602 14:29:38 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:23:33.602 14:29:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.602 14:29:38 -- common/autotest_common.sh@10 -- # set +x 00:23:33.602 14:29:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.602 14:29:38 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:33.602 14:29:38 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:33.602 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:33.602 Zero copy mechanism will not be used. 00:23:33.602 Running I/O for 2 seconds... 
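[editor's note] The second pass (run_bperf_err randwrite 131072 16) repeats the setup traced above: bdevperf is launched with -z so it idles until perform_tests arrives on /var/tmp/bperf.sock, the controller is attached with the TCP data digest enabled (--ddgst), and the accel framework is told to corrupt crc32c results (accel_error_inject_error ... -i 32, issued through rpc_cmd, i.e. the target-side RPC socket rather than the bperf one) before the 2-second workload starts. Collected into one hedged sketch with the paths and ordering taken from the trace; the comments and the "rpc" shorthand are mine:

  # start bdevperf in the background; -z makes it wait for the perform_tests RPC
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 131072 -t 2 -q 16 -z &
  # (the test waits here for the RPC socket via waitforlisten)
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # collect NVMe errors per status code and retry failed I/O indefinitely in the bdev layer
  $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # attach the target with data digest enabled so each data PDU carries a CRC32C
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # inject crc32c corruption on the target side (flags as issued in the trace above)
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 32
  # kick off the 2-second randwrite run
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests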
00:23:33.602 [2024-12-05 14:29:39.096978] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.602 [2024-12-05 14:29:39.097329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.602 [2024-12-05 14:29:39.097364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:33.602 [2024-12-05 14:29:39.101319] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.602 [2024-12-05 14:29:39.101603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.602 [2024-12-05 14:29:39.101634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:33.602 [2024-12-05 14:29:39.105801] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.602 [2024-12-05 14:29:39.105990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.602 [2024-12-05 14:29:39.106012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:33.602 [2024-12-05 14:29:39.110010] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.602 [2024-12-05 14:29:39.110119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.602 [2024-12-05 14:29:39.110140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.602 [2024-12-05 14:29:39.114237] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.602 [2024-12-05 14:29:39.114385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.602 [2024-12-05 14:29:39.114406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:33.602 [2024-12-05 14:29:39.118369] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.602 [2024-12-05 14:29:39.118458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.602 [2024-12-05 14:29:39.118479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:33.602 [2024-12-05 14:29:39.122657] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.602 [2024-12-05 14:29:39.122787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.602 [2024-12-05 14:29:39.122821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:23:33.602 [2024-12-05 14:29:39.126893] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.602 [2024-12-05 14:29:39.127111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.602 [2024-12-05 14:29:39.127134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.602 [2024-12-05 14:29:39.131071] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.602 [2024-12-05 14:29:39.131220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.602 [2024-12-05 14:29:39.131241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:33.602 [2024-12-05 14:29:39.135280] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.602 [2024-12-05 14:29:39.135419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.602 [2024-12-05 14:29:39.135440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:33.602 [2024-12-05 14:29:39.139474] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.602 [2024-12-05 14:29:39.139616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.602 [2024-12-05 14:29:39.139637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:33.602 [2024-12-05 14:29:39.143817] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.602 [2024-12-05 14:29:39.143909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.602 [2024-12-05 14:29:39.143930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.602 [2024-12-05 14:29:39.148122] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.602 [2024-12-05 14:29:39.148220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.602 [2024-12-05 14:29:39.148241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:33.602 [2024-12-05 14:29:39.152428] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.603 [2024-12-05 14:29:39.152532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.603 [2024-12-05 14:29:39.152552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:33.603 [2024-12-05 14:29:39.156703] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.603 [2024-12-05 14:29:39.156846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.603 [2024-12-05 14:29:39.156876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:33.603 [2024-12-05 14:29:39.161109] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.603 [2024-12-05 14:29:39.161326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.603 [2024-12-05 14:29:39.161347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.603 [2024-12-05 14:29:39.165489] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.603 [2024-12-05 14:29:39.165615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.603 [2024-12-05 14:29:39.165636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:33.603 [2024-12-05 14:29:39.169743] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.603 [2024-12-05 14:29:39.169851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.603 [2024-12-05 14:29:39.169872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:33.603 [2024-12-05 14:29:39.173846] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.603 [2024-12-05 14:29:39.173951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.603 [2024-12-05 14:29:39.173971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:33.603 [2024-12-05 14:29:39.178204] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.603 [2024-12-05 14:29:39.178282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.603 [2024-12-05 14:29:39.178303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.603 [2024-12-05 14:29:39.182431] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.603 [2024-12-05 14:29:39.182522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.603 [2024-12-05 14:29:39.182543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:33.603 [2024-12-05 14:29:39.186728] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.603 [2024-12-05 14:29:39.186842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.603 [2024-12-05 14:29:39.186863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:33.603 [2024-12-05 14:29:39.190963] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.603 [2024-12-05 14:29:39.191090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.603 [2024-12-05 14:29:39.191111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:33.603 [2024-12-05 14:29:39.195266] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.603 [2024-12-05 14:29:39.195414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.603 [2024-12-05 14:29:39.195434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.603 [2024-12-05 14:29:39.199474] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.603 [2024-12-05 14:29:39.199643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.603 [2024-12-05 14:29:39.199662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:33.603 [2024-12-05 14:29:39.203727] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.603 [2024-12-05 14:29:39.203852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.603 [2024-12-05 14:29:39.203873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:33.603 [2024-12-05 14:29:39.208006] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.603 [2024-12-05 14:29:39.208131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.603 [2024-12-05 14:29:39.208151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:33.603 [2024-12-05 14:29:39.212284] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.603 [2024-12-05 14:29:39.212396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.603 [2024-12-05 14:29:39.212417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.603 [2024-12-05 14:29:39.216471] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.603 [2024-12-05 14:29:39.216584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.603 [2024-12-05 14:29:39.216604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:33.603 [2024-12-05 14:29:39.220828] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.603 [2024-12-05 14:29:39.220965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.603 [2024-12-05 14:29:39.220986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:33.603 [2024-12-05 14:29:39.225037] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.603 [2024-12-05 14:29:39.225197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.603 [2024-12-05 14:29:39.225217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:33.603 [2024-12-05 14:29:39.229397] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.603 [2024-12-05 14:29:39.229547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.603 [2024-12-05 14:29:39.229568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.603 [2024-12-05 14:29:39.233534] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.603 [2024-12-05 14:29:39.233656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.603 [2024-12-05 14:29:39.233676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:33.603 [2024-12-05 14:29:39.237952] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.603 [2024-12-05 14:29:39.238077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.603 [2024-12-05 14:29:39.238098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:33.603 [2024-12-05 14:29:39.242206] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.603 [2024-12-05 14:29:39.242309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.603 [2024-12-05 14:29:39.242345] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:33.864 [2024-12-05 14:29:39.246605] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.864 [2024-12-05 14:29:39.246756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.864 [2024-12-05 14:29:39.246778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.864 [2024-12-05 14:29:39.250987] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.864 [2024-12-05 14:29:39.251074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.864 [2024-12-05 14:29:39.251095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:33.864 [2024-12-05 14:29:39.255315] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.864 [2024-12-05 14:29:39.255441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.864 [2024-12-05 14:29:39.255461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:33.864 [2024-12-05 14:29:39.259497] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.864 [2024-12-05 14:29:39.259679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.864 [2024-12-05 14:29:39.259699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:33.864 [2024-12-05 14:29:39.263748] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.864 [2024-12-05 14:29:39.263879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.864 [2024-12-05 14:29:39.263899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.864 [2024-12-05 14:29:39.268049] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.864 [2024-12-05 14:29:39.268225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.864 [2024-12-05 14:29:39.268247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:33.864 [2024-12-05 14:29:39.272349] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.864 [2024-12-05 14:29:39.272489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.864 [2024-12-05 
14:29:39.272510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:33.864 [2024-12-05 14:29:39.276577] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.864 [2024-12-05 14:29:39.276689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.864 [2024-12-05 14:29:39.276709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:33.864 [2024-12-05 14:29:39.280877] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.864 [2024-12-05 14:29:39.281036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.864 [2024-12-05 14:29:39.281057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.864 [2024-12-05 14:29:39.285105] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.864 [2024-12-05 14:29:39.285201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.864 [2024-12-05 14:29:39.285221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:33.864 [2024-12-05 14:29:39.289368] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.864 [2024-12-05 14:29:39.289506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.864 [2024-12-05 14:29:39.289526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:33.864 [2024-12-05 14:29:39.293696] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.864 [2024-12-05 14:29:39.293835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.864 [2024-12-05 14:29:39.293863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:33.864 [2024-12-05 14:29:39.297928] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.864 [2024-12-05 14:29:39.298043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.864 [2024-12-05 14:29:39.298063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.864 [2024-12-05 14:29:39.302172] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.864 [2024-12-05 14:29:39.302363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:33.864 [2024-12-05 14:29:39.302383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:33.864 [2024-12-05 14:29:39.306456] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.864 [2024-12-05 14:29:39.306577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.864 [2024-12-05 14:29:39.306598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:33.864 [2024-12-05 14:29:39.310721] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.864 [2024-12-05 14:29:39.310900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.864 [2024-12-05 14:29:39.310921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:33.864 [2024-12-05 14:29:39.315050] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.864 [2024-12-05 14:29:39.315203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.864 [2024-12-05 14:29:39.315223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.864 [2024-12-05 14:29:39.319212] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.864 [2024-12-05 14:29:39.319325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.864 [2024-12-05 14:29:39.319345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:33.864 [2024-12-05 14:29:39.323454] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.864 [2024-12-05 14:29:39.323555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.864 [2024-12-05 14:29:39.323575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:33.864 [2024-12-05 14:29:39.327658] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.864 [2024-12-05 14:29:39.327877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.864 [2024-12-05 14:29:39.327899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:33.864 [2024-12-05 14:29:39.332073] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.864 [2024-12-05 14:29:39.332185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:33.864 [2024-12-05 14:29:39.332206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.864 [2024-12-05 14:29:39.336408] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.864 [2024-12-05 14:29:39.336589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.865 [2024-12-05 14:29:39.336609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:33.865 [2024-12-05 14:29:39.340747] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.865 [2024-12-05 14:29:39.340878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.865 [2024-12-05 14:29:39.340899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:33.865 [2024-12-05 14:29:39.344947] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.865 [2024-12-05 14:29:39.345077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.865 [2024-12-05 14:29:39.345097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:33.865 [2024-12-05 14:29:39.349202] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.865 [2024-12-05 14:29:39.349333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.865 [2024-12-05 14:29:39.349353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.865 [2024-12-05 14:29:39.353314] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.865 [2024-12-05 14:29:39.353388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.865 [2024-12-05 14:29:39.353408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:33.865 [2024-12-05 14:29:39.357475] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.865 [2024-12-05 14:29:39.357589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.865 [2024-12-05 14:29:39.357610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:33.865 [2024-12-05 14:29:39.361718] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.865 [2024-12-05 14:29:39.361895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.865 [2024-12-05 14:29:39.361915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:33.865 [2024-12-05 14:29:39.365998] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.865 [2024-12-05 14:29:39.366102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.865 [2024-12-05 14:29:39.366122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.865 [2024-12-05 14:29:39.370211] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.865 [2024-12-05 14:29:39.370337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.865 [2024-12-05 14:29:39.370358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:33.865 [2024-12-05 14:29:39.374455] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.865 [2024-12-05 14:29:39.374558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.865 [2024-12-05 14:29:39.374578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:33.865 [2024-12-05 14:29:39.378689] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.865 [2024-12-05 14:29:39.378774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.865 [2024-12-05 14:29:39.378794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:33.865 [2024-12-05 14:29:39.383023] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.865 [2024-12-05 14:29:39.383184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.865 [2024-12-05 14:29:39.383205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.865 [2024-12-05 14:29:39.387266] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.865 [2024-12-05 14:29:39.387398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.865 [2024-12-05 14:29:39.387419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:33.865 [2024-12-05 14:29:39.391468] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.865 [2024-12-05 14:29:39.391636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.865 [2024-12-05 14:29:39.391657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:33.865 [2024-12-05 14:29:39.395780] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.865 [2024-12-05 14:29:39.395966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.865 [2024-12-05 14:29:39.395991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:33.865 [2024-12-05 14:29:39.399986] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.865 [2024-12-05 14:29:39.400113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.865 [2024-12-05 14:29:39.400134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.865 [2024-12-05 14:29:39.404135] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.865 [2024-12-05 14:29:39.404280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.865 [2024-12-05 14:29:39.404300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:33.865 [2024-12-05 14:29:39.408499] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.865 [2024-12-05 14:29:39.408641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.865 [2024-12-05 14:29:39.408662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:33.865 [2024-12-05 14:29:39.412858] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.865 [2024-12-05 14:29:39.412986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.865 [2024-12-05 14:29:39.413006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:33.865 [2024-12-05 14:29:39.417177] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.865 [2024-12-05 14:29:39.417351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.865 [2024-12-05 14:29:39.417372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.865 [2024-12-05 14:29:39.421473] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.865 [2024-12-05 14:29:39.421627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.865 [2024-12-05 14:29:39.421647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:33.865 [2024-12-05 14:29:39.425719] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.865 [2024-12-05 14:29:39.425875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.865 [2024-12-05 14:29:39.425897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:33.865 [2024-12-05 14:29:39.430196] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.865 [2024-12-05 14:29:39.430354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.865 [2024-12-05 14:29:39.430375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:33.865 [2024-12-05 14:29:39.434771] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.865 [2024-12-05 14:29:39.434919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.865 [2024-12-05 14:29:39.434940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.865 [2024-12-05 14:29:39.439541] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.865 [2024-12-05 14:29:39.439665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.865 [2024-12-05 14:29:39.439685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:33.865 [2024-12-05 14:29:39.444433] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.865 [2024-12-05 14:29:39.444567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.865 [2024-12-05 14:29:39.444588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:33.865 [2024-12-05 14:29:39.449320] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.865 [2024-12-05 14:29:39.449413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.865 [2024-12-05 14:29:39.449433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:33.865 [2024-12-05 14:29:39.453949] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.865 [2024-12-05 14:29:39.454146] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.865 [2024-12-05 14:29:39.454180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.866 [2024-12-05 14:29:39.458634] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.866 [2024-12-05 14:29:39.458758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.866 [2024-12-05 14:29:39.458778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:33.866 [2024-12-05 14:29:39.463345] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.866 [2024-12-05 14:29:39.463487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.866 [2024-12-05 14:29:39.463507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:33.866 [2024-12-05 14:29:39.468147] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.866 [2024-12-05 14:29:39.468353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.866 [2024-12-05 14:29:39.468373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:33.866 [2024-12-05 14:29:39.472832] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.866 [2024-12-05 14:29:39.473040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.866 [2024-12-05 14:29:39.473061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.866 [2024-12-05 14:29:39.477327] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.866 [2024-12-05 14:29:39.477499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.866 [2024-12-05 14:29:39.477519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:33.866 [2024-12-05 14:29:39.481689] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.866 [2024-12-05 14:29:39.481857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.866 [2024-12-05 14:29:39.481879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:33.866 [2024-12-05 14:29:39.486022] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.866 [2024-12-05 
14:29:39.486148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.866 [2024-12-05 14:29:39.486168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:33.866 [2024-12-05 14:29:39.490496] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.866 [2024-12-05 14:29:39.490664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.866 [2024-12-05 14:29:39.490685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.866 [2024-12-05 14:29:39.494867] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.866 [2024-12-05 14:29:39.495013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.866 [2024-12-05 14:29:39.495032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:33.866 [2024-12-05 14:29:39.499143] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.866 [2024-12-05 14:29:39.499296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.866 [2024-12-05 14:29:39.499317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:33.866 [2024-12-05 14:29:39.503474] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:33.866 [2024-12-05 14:29:39.503624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.866 [2024-12-05 14:29:39.503660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.129 [2024-12-05 14:29:39.507998] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.129 [2024-12-05 14:29:39.508099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.129 [2024-12-05 14:29:39.508121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.129 [2024-12-05 14:29:39.512644] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.129 [2024-12-05 14:29:39.512817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.129 [2024-12-05 14:29:39.512838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.129 [2024-12-05 14:29:39.517109] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 
00:23:34.129 [2024-12-05 14:29:39.517253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.129 [2024-12-05 14:29:39.517273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.129 [2024-12-05 14:29:39.521395] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.129 [2024-12-05 14:29:39.521529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.129 [2024-12-05 14:29:39.521549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.129 [2024-12-05 14:29:39.525921] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.129 [2024-12-05 14:29:39.526063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.129 [2024-12-05 14:29:39.526084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.129 [2024-12-05 14:29:39.530213] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.129 [2024-12-05 14:29:39.530305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.129 [2024-12-05 14:29:39.530325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.129 [2024-12-05 14:29:39.534493] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.129 [2024-12-05 14:29:39.534612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.129 [2024-12-05 14:29:39.534632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.129 [2024-12-05 14:29:39.539041] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.129 [2024-12-05 14:29:39.539202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.129 [2024-12-05 14:29:39.539222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.129 [2024-12-05 14:29:39.543332] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.129 [2024-12-05 14:29:39.543438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.129 [2024-12-05 14:29:39.543459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.129 [2024-12-05 14:29:39.547646] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) 
with pdu=0x2000190fef90 00:23:34.129 [2024-12-05 14:29:39.547823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.129 [2024-12-05 14:29:39.547843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.129 [2024-12-05 14:29:39.552034] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.129 [2024-12-05 14:29:39.552136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.129 [2024-12-05 14:29:39.552157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.129 [2024-12-05 14:29:39.556531] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.129 [2024-12-05 14:29:39.556629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.129 [2024-12-05 14:29:39.556649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.129 [2024-12-05 14:29:39.560851] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.129 [2024-12-05 14:29:39.561021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.129 [2024-12-05 14:29:39.561042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.129 [2024-12-05 14:29:39.565098] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.129 [2024-12-05 14:29:39.565231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.129 [2024-12-05 14:29:39.565251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.129 [2024-12-05 14:29:39.569600] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.129 [2024-12-05 14:29:39.569719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.129 [2024-12-05 14:29:39.569739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.129 [2024-12-05 14:29:39.573935] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.129 [2024-12-05 14:29:39.574100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.129 [2024-12-05 14:29:39.574121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.129 [2024-12-05 14:29:39.578244] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.129 [2024-12-05 14:29:39.578356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.129 [2024-12-05 14:29:39.578377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.129 [2024-12-05 14:29:39.582953] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.129 [2024-12-05 14:29:39.583116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.129 [2024-12-05 14:29:39.583137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.129 [2024-12-05 14:29:39.587222] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.129 [2024-12-05 14:29:39.587396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.129 [2024-12-05 14:29:39.587416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.129 [2024-12-05 14:29:39.591454] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.129 [2024-12-05 14:29:39.591584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.129 [2024-12-05 14:29:39.591604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.129 [2024-12-05 14:29:39.596042] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.129 [2024-12-05 14:29:39.596180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.129 [2024-12-05 14:29:39.596201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.129 [2024-12-05 14:29:39.600285] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.129 [2024-12-05 14:29:39.600419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.130 [2024-12-05 14:29:39.600439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.130 [2024-12-05 14:29:39.604582] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.130 [2024-12-05 14:29:39.604734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.130 [2024-12-05 14:29:39.604754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.130 [2024-12-05 14:29:39.609142] tcp.c:2036:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.130 [2024-12-05 14:29:39.609285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.130 [2024-12-05 14:29:39.609306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.130 [2024-12-05 14:29:39.613633] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.130 [2024-12-05 14:29:39.613720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.130 [2024-12-05 14:29:39.613740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.130 [2024-12-05 14:29:39.618032] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.130 [2024-12-05 14:29:39.618189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.130 [2024-12-05 14:29:39.618210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.130 [2024-12-05 14:29:39.622307] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.130 [2024-12-05 14:29:39.622450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.130 [2024-12-05 14:29:39.622471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.130 [2024-12-05 14:29:39.626549] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.130 [2024-12-05 14:29:39.626652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.130 [2024-12-05 14:29:39.626672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.130 [2024-12-05 14:29:39.630794] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.130 [2024-12-05 14:29:39.630968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.130 [2024-12-05 14:29:39.630988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.130 [2024-12-05 14:29:39.634970] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.130 [2024-12-05 14:29:39.635065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.130 [2024-12-05 14:29:39.635085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.130 [2024-12-05 14:29:39.639180] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.130 [2024-12-05 14:29:39.639304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.130 [2024-12-05 14:29:39.639324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.130 [2024-12-05 14:29:39.643350] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.130 [2024-12-05 14:29:39.643474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.130 [2024-12-05 14:29:39.643494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.130 [2024-12-05 14:29:39.647459] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.130 [2024-12-05 14:29:39.647603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.130 [2024-12-05 14:29:39.647623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.130 [2024-12-05 14:29:39.651754] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.130 [2024-12-05 14:29:39.651921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.130 [2024-12-05 14:29:39.651941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.130 [2024-12-05 14:29:39.656003] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.130 [2024-12-05 14:29:39.656081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.130 [2024-12-05 14:29:39.656101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.130 [2024-12-05 14:29:39.660191] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.130 [2024-12-05 14:29:39.660328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.130 [2024-12-05 14:29:39.660349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.130 [2024-12-05 14:29:39.664549] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.130 [2024-12-05 14:29:39.664683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.130 [2024-12-05 14:29:39.664704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.130 
[2024-12-05 14:29:39.668701] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.130 [2024-12-05 14:29:39.668831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.130 [2024-12-05 14:29:39.668852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.130 [2024-12-05 14:29:39.672980] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.130 [2024-12-05 14:29:39.673145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.130 [2024-12-05 14:29:39.673166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.130 [2024-12-05 14:29:39.677233] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.130 [2024-12-05 14:29:39.677416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.130 [2024-12-05 14:29:39.677435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.130 [2024-12-05 14:29:39.681421] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.130 [2024-12-05 14:29:39.681542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.130 [2024-12-05 14:29:39.681563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.130 [2024-12-05 14:29:39.685653] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.130 [2024-12-05 14:29:39.685777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.130 [2024-12-05 14:29:39.685798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.130 [2024-12-05 14:29:39.689876] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.130 [2024-12-05 14:29:39.689974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.130 [2024-12-05 14:29:39.689995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.130 [2024-12-05 14:29:39.694012] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.130 [2024-12-05 14:29:39.694139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.130 [2024-12-05 14:29:39.694158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:23:34.130 [2024-12-05 14:29:39.698282] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.130 [2024-12-05 14:29:39.698444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.130 [2024-12-05 14:29:39.698465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.130 [2024-12-05 14:29:39.702467] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.130 [2024-12-05 14:29:39.702544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.130 [2024-12-05 14:29:39.702564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.130 [2024-12-05 14:29:39.706722] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.130 [2024-12-05 14:29:39.706861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.130 [2024-12-05 14:29:39.706882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.130 [2024-12-05 14:29:39.710846] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.130 [2024-12-05 14:29:39.710948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.130 [2024-12-05 14:29:39.710968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.130 [2024-12-05 14:29:39.715018] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.130 [2024-12-05 14:29:39.715111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.130 [2024-12-05 14:29:39.715132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.130 [2024-12-05 14:29:39.719198] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.131 [2024-12-05 14:29:39.719364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.131 [2024-12-05 14:29:39.719384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.131 [2024-12-05 14:29:39.723449] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.131 [2024-12-05 14:29:39.723545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.131 [2024-12-05 14:29:39.723564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.131 [2024-12-05 14:29:39.727705] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.131 [2024-12-05 14:29:39.727837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.131 [2024-12-05 14:29:39.727858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.131 [2024-12-05 14:29:39.732069] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.131 [2024-12-05 14:29:39.732210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.131 [2024-12-05 14:29:39.732243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.131 [2024-12-05 14:29:39.736279] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.131 [2024-12-05 14:29:39.736393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.131 [2024-12-05 14:29:39.736413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.131 [2024-12-05 14:29:39.740488] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.131 [2024-12-05 14:29:39.740616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.131 [2024-12-05 14:29:39.740636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.131 [2024-12-05 14:29:39.744774] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.131 [2024-12-05 14:29:39.744922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.131 [2024-12-05 14:29:39.744943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.131 [2024-12-05 14:29:39.748907] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.131 [2024-12-05 14:29:39.749040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.131 [2024-12-05 14:29:39.749060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.131 [2024-12-05 14:29:39.753182] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.131 [2024-12-05 14:29:39.753330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.131 [2024-12-05 14:29:39.753350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.131 [2024-12-05 14:29:39.757346] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.131 [2024-12-05 14:29:39.757447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.131 [2024-12-05 14:29:39.757467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.131 [2024-12-05 14:29:39.761826] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.131 [2024-12-05 14:29:39.761952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.131 [2024-12-05 14:29:39.761972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.131 [2024-12-05 14:29:39.766098] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.131 [2024-12-05 14:29:39.766223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.131 [2024-12-05 14:29:39.766243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.131 [2024-12-05 14:29:39.770519] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.131 [2024-12-05 14:29:39.770636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.131 [2024-12-05 14:29:39.770656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.393 [2024-12-05 14:29:39.774899] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.393 [2024-12-05 14:29:39.775056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.393 [2024-12-05 14:29:39.775076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.393 [2024-12-05 14:29:39.779196] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.393 [2024-12-05 14:29:39.779320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.393 [2024-12-05 14:29:39.779341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.393 [2024-12-05 14:29:39.783561] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.393 [2024-12-05 14:29:39.783653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.393 [2024-12-05 14:29:39.783673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.393 [2024-12-05 14:29:39.787845] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.393 [2024-12-05 14:29:39.788052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.393 [2024-12-05 14:29:39.788075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.393 [2024-12-05 14:29:39.792115] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.393 [2024-12-05 14:29:39.792213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.393 [2024-12-05 14:29:39.792235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.393 [2024-12-05 14:29:39.796469] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.393 [2024-12-05 14:29:39.796589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.393 [2024-12-05 14:29:39.796610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.393 [2024-12-05 14:29:39.800687] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.393 [2024-12-05 14:29:39.800823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.393 [2024-12-05 14:29:39.800857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.393 [2024-12-05 14:29:39.805018] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.393 [2024-12-05 14:29:39.805133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.393 [2024-12-05 14:29:39.805155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.393 [2024-12-05 14:29:39.809231] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.393 [2024-12-05 14:29:39.809357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.393 [2024-12-05 14:29:39.809378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.393 [2024-12-05 14:29:39.813504] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.393 [2024-12-05 14:29:39.813605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.393 [2024-12-05 14:29:39.813626] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.393 [2024-12-05 14:29:39.817693] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.393 [2024-12-05 14:29:39.817858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.393 [2024-12-05 14:29:39.817878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.393 [2024-12-05 14:29:39.822008] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.393 [2024-12-05 14:29:39.822140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.393 [2024-12-05 14:29:39.822160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.393 [2024-12-05 14:29:39.826330] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.393 [2024-12-05 14:29:39.826407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.393 [2024-12-05 14:29:39.826428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.393 [2024-12-05 14:29:39.830491] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.393 [2024-12-05 14:29:39.830595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.393 [2024-12-05 14:29:39.830616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.393 [2024-12-05 14:29:39.834854] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.393 [2024-12-05 14:29:39.834999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.393 [2024-12-05 14:29:39.835020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.393 [2024-12-05 14:29:39.839118] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.393 [2024-12-05 14:29:39.839234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.393 [2024-12-05 14:29:39.839254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.393 [2024-12-05 14:29:39.843302] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.393 [2024-12-05 14:29:39.843429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.393 [2024-12-05 
14:29:39.843450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.393 [2024-12-05 14:29:39.847656] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.393 [2024-12-05 14:29:39.847778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.393 [2024-12-05 14:29:39.847799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.393 [2024-12-05 14:29:39.851876] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.393 [2024-12-05 14:29:39.851981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.393 [2024-12-05 14:29:39.852001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.393 [2024-12-05 14:29:39.856188] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.393 [2024-12-05 14:29:39.856366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.393 [2024-12-05 14:29:39.856386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.393 [2024-12-05 14:29:39.860359] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.394 [2024-12-05 14:29:39.860495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.394 [2024-12-05 14:29:39.860516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.394 [2024-12-05 14:29:39.864701] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.394 [2024-12-05 14:29:39.864858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.394 [2024-12-05 14:29:39.864879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.394 [2024-12-05 14:29:39.869003] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.394 [2024-12-05 14:29:39.869174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.394 [2024-12-05 14:29:39.869195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.394 [2024-12-05 14:29:39.873329] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.394 [2024-12-05 14:29:39.873413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:34.394 [2024-12-05 14:29:39.873434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.394 [2024-12-05 14:29:39.877479] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.394 [2024-12-05 14:29:39.877610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.394 [2024-12-05 14:29:39.877631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.394 [2024-12-05 14:29:39.881702] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.394 [2024-12-05 14:29:39.881853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.394 [2024-12-05 14:29:39.881875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.394 [2024-12-05 14:29:39.885911] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.394 [2024-12-05 14:29:39.886005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.394 [2024-12-05 14:29:39.886025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.394 [2024-12-05 14:29:39.890132] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.394 [2024-12-05 14:29:39.890300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.394 [2024-12-05 14:29:39.890321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.394 [2024-12-05 14:29:39.894369] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.394 [2024-12-05 14:29:39.894483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.394 [2024-12-05 14:29:39.894503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.394 [2024-12-05 14:29:39.898516] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.394 [2024-12-05 14:29:39.898619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.394 [2024-12-05 14:29:39.898639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.394 [2024-12-05 14:29:39.902905] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.394 [2024-12-05 14:29:39.903048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:34.394 [2024-12-05 14:29:39.903068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.394 [2024-12-05 14:29:39.907203] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.394 [2024-12-05 14:29:39.907333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.394 [2024-12-05 14:29:39.907353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.394 [2024-12-05 14:29:39.911484] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.394 [2024-12-05 14:29:39.911668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.394 [2024-12-05 14:29:39.911689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.394 [2024-12-05 14:29:39.915785] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.394 [2024-12-05 14:29:39.915930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.394 [2024-12-05 14:29:39.915959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.394 [2024-12-05 14:29:39.920083] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.394 [2024-12-05 14:29:39.920181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.394 [2024-12-05 14:29:39.920202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.394 [2024-12-05 14:29:39.924461] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.394 [2024-12-05 14:29:39.924585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.394 [2024-12-05 14:29:39.924605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.394 [2024-12-05 14:29:39.928667] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.394 [2024-12-05 14:29:39.928741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.394 [2024-12-05 14:29:39.928760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.394 [2024-12-05 14:29:39.932885] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.394 [2024-12-05 14:29:39.933010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.394 [2024-12-05 14:29:39.933031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.394 [2024-12-05 14:29:39.937097] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.394 [2024-12-05 14:29:39.937246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.394 [2024-12-05 14:29:39.937266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.394 [2024-12-05 14:29:39.941246] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.394 [2024-12-05 14:29:39.941361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.394 [2024-12-05 14:29:39.941381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.394 [2024-12-05 14:29:39.945340] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.394 [2024-12-05 14:29:39.945466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.394 [2024-12-05 14:29:39.945487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.394 [2024-12-05 14:29:39.949689] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.394 [2024-12-05 14:29:39.949840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.394 [2024-12-05 14:29:39.949860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.394 [2024-12-05 14:29:39.953908] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.394 [2024-12-05 14:29:39.954006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.394 [2024-12-05 14:29:39.954026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.394 [2024-12-05 14:29:39.958051] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.394 [2024-12-05 14:29:39.958190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.394 [2024-12-05 14:29:39.958210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.394 [2024-12-05 14:29:39.962190] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.394 [2024-12-05 14:29:39.962353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.394 [2024-12-05 14:29:39.962373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.394 [2024-12-05 14:29:39.966489] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.394 [2024-12-05 14:29:39.966630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.394 [2024-12-05 14:29:39.966651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.394 [2024-12-05 14:29:39.970699] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.394 [2024-12-05 14:29:39.970840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.394 [2024-12-05 14:29:39.970861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.395 [2024-12-05 14:29:39.975054] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.395 [2024-12-05 14:29:39.975144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.395 [2024-12-05 14:29:39.975165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.395 [2024-12-05 14:29:39.979221] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.395 [2024-12-05 14:29:39.979390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.395 [2024-12-05 14:29:39.979411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.395 [2024-12-05 14:29:39.983526] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.395 [2024-12-05 14:29:39.983643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.395 [2024-12-05 14:29:39.983664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.395 [2024-12-05 14:29:39.987712] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.395 [2024-12-05 14:29:39.987812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.395 [2024-12-05 14:29:39.987833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.395 [2024-12-05 14:29:39.992036] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.395 [2024-12-05 14:29:39.992181] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.395 [2024-12-05 14:29:39.992202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.395 [2024-12-05 14:29:39.996361] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.395 [2024-12-05 14:29:39.996435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.395 [2024-12-05 14:29:39.996455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.395 [2024-12-05 14:29:40.001258] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.395 [2024-12-05 14:29:40.001379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.395 [2024-12-05 14:29:40.001401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.395 [2024-12-05 14:29:40.005920] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.395 [2024-12-05 14:29:40.006068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.395 [2024-12-05 14:29:40.006090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.395 [2024-12-05 14:29:40.010549] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.395 [2024-12-05 14:29:40.010684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.395 [2024-12-05 14:29:40.010706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.395 [2024-12-05 14:29:40.015292] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.395 [2024-12-05 14:29:40.015458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.395 [2024-12-05 14:29:40.015479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.395 [2024-12-05 14:29:40.019891] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.395 [2024-12-05 14:29:40.020063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.395 [2024-12-05 14:29:40.020086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.395 [2024-12-05 14:29:40.026052] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.395 [2024-12-05 14:29:40.026174] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.395 [2024-12-05 14:29:40.026195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.395 [2024-12-05 14:29:40.030968] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.395 [2024-12-05 14:29:40.031145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.395 [2024-12-05 14:29:40.031166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.395 [2024-12-05 14:29:40.035584] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.395 [2024-12-05 14:29:40.035684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.395 [2024-12-05 14:29:40.035704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.657 [2024-12-05 14:29:40.040107] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.657 [2024-12-05 14:29:40.040204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.657 [2024-12-05 14:29:40.040227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.657 [2024-12-05 14:29:40.044680] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.657 [2024-12-05 14:29:40.044771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.657 [2024-12-05 14:29:40.044791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.657 [2024-12-05 14:29:40.050270] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.657 [2024-12-05 14:29:40.050436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.657 [2024-12-05 14:29:40.050458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.657 [2024-12-05 14:29:40.054998] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.657 [2024-12-05 14:29:40.055166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.657 [2024-12-05 14:29:40.055188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.657 [2024-12-05 14:29:40.059341] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.657 [2024-12-05 
14:29:40.059484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.657 [2024-12-05 14:29:40.059505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.657 [2024-12-05 14:29:40.064145] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.657 [2024-12-05 14:29:40.064239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.657 [2024-12-05 14:29:40.064269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.657 [2024-12-05 14:29:40.068535] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.657 [2024-12-05 14:29:40.068694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.657 [2024-12-05 14:29:40.068715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.657 [2024-12-05 14:29:40.072829] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.657 [2024-12-05 14:29:40.072995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.657 [2024-12-05 14:29:40.073016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.657 [2024-12-05 14:29:40.077119] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.657 [2024-12-05 14:29:40.077249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.657 [2024-12-05 14:29:40.077271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.657 [2024-12-05 14:29:40.081634] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.657 [2024-12-05 14:29:40.081733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.657 [2024-12-05 14:29:40.081754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.657 [2024-12-05 14:29:40.085968] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.657 [2024-12-05 14:29:40.086096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.657 [2024-12-05 14:29:40.086117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.657 [2024-12-05 14:29:40.090193] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 
00:23:34.657 [2024-12-05 14:29:40.090414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.657 [2024-12-05 14:29:40.090441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.657 [2024-12-05 14:29:40.094434] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.657 [2024-12-05 14:29:40.094535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.657 [2024-12-05 14:29:40.094556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.657 [2024-12-05 14:29:40.098653] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.657 [2024-12-05 14:29:40.098780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.657 [2024-12-05 14:29:40.098813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.657 [2024-12-05 14:29:40.103023] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.657 [2024-12-05 14:29:40.103164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.657 [2024-12-05 14:29:40.103185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.657 [2024-12-05 14:29:40.107278] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.657 [2024-12-05 14:29:40.107390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.657 [2024-12-05 14:29:40.107410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.657 [2024-12-05 14:29:40.111656] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.657 [2024-12-05 14:29:40.111825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.657 [2024-12-05 14:29:40.111846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.657 [2024-12-05 14:29:40.115945] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.657 [2024-12-05 14:29:40.116073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.657 [2024-12-05 14:29:40.116093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.657 [2024-12-05 14:29:40.120150] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) 
with pdu=0x2000190fef90 00:23:34.657 [2024-12-05 14:29:40.120272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.657 [2024-12-05 14:29:40.120303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.657 [2024-12-05 14:29:40.124435] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.657 [2024-12-05 14:29:40.124560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.657 [2024-12-05 14:29:40.124580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.657 [2024-12-05 14:29:40.128656] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.657 [2024-12-05 14:29:40.128771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.657 [2024-12-05 14:29:40.128791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.657 [2024-12-05 14:29:40.132969] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.657 [2024-12-05 14:29:40.133115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.657 [2024-12-05 14:29:40.133136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.657 [2024-12-05 14:29:40.137273] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.657 [2024-12-05 14:29:40.137383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.657 [2024-12-05 14:29:40.137404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.657 [2024-12-05 14:29:40.141412] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.657 [2024-12-05 14:29:40.141512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.657 [2024-12-05 14:29:40.141532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.657 [2024-12-05 14:29:40.145608] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.657 [2024-12-05 14:29:40.145772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.657 [2024-12-05 14:29:40.145792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.657 [2024-12-05 14:29:40.149713] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.657 [2024-12-05 14:29:40.149790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.658 [2024-12-05 14:29:40.149823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.658 [2024-12-05 14:29:40.153903] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.658 [2024-12-05 14:29:40.154007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.658 [2024-12-05 14:29:40.154028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.658 [2024-12-05 14:29:40.158168] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.658 [2024-12-05 14:29:40.158299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.658 [2024-12-05 14:29:40.158320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.658 [2024-12-05 14:29:40.162494] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.658 [2024-12-05 14:29:40.162593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.658 [2024-12-05 14:29:40.162613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.658 [2024-12-05 14:29:40.166849] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.658 [2024-12-05 14:29:40.166975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.658 [2024-12-05 14:29:40.166995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.658 [2024-12-05 14:29:40.171217] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.658 [2024-12-05 14:29:40.171360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.658 [2024-12-05 14:29:40.171379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.658 [2024-12-05 14:29:40.175441] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.658 [2024-12-05 14:29:40.175556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.658 [2024-12-05 14:29:40.175576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.658 [2024-12-05 14:29:40.179687] tcp.c:2036:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.658 [2024-12-05 14:29:40.179893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.658 [2024-12-05 14:29:40.179914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.658 [2024-12-05 14:29:40.184029] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.658 [2024-12-05 14:29:40.184131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.658 [2024-12-05 14:29:40.184152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.658 [2024-12-05 14:29:40.188355] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.658 [2024-12-05 14:29:40.188469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.658 [2024-12-05 14:29:40.188490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.658 [2024-12-05 14:29:40.192544] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.658 [2024-12-05 14:29:40.192677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.658 [2024-12-05 14:29:40.192697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.658 [2024-12-05 14:29:40.196771] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.658 [2024-12-05 14:29:40.196936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.658 [2024-12-05 14:29:40.196957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.658 [2024-12-05 14:29:40.201041] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.658 [2024-12-05 14:29:40.201181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.658 [2024-12-05 14:29:40.201212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.658 [2024-12-05 14:29:40.205291] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.658 [2024-12-05 14:29:40.205408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.658 [2024-12-05 14:29:40.205428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.658 [2024-12-05 14:29:40.209611] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.658 [2024-12-05 14:29:40.209718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.658 [2024-12-05 14:29:40.209738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.658 [2024-12-05 14:29:40.213859] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.658 [2024-12-05 14:29:40.214001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.658 [2024-12-05 14:29:40.214022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.658 [2024-12-05 14:29:40.218134] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.658 [2024-12-05 14:29:40.218271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.658 [2024-12-05 14:29:40.218292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.658 [2024-12-05 14:29:40.222408] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.658 [2024-12-05 14:29:40.222572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.658 [2024-12-05 14:29:40.222593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.658 [2024-12-05 14:29:40.226650] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.658 [2024-12-05 14:29:40.226764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.658 [2024-12-05 14:29:40.226784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.658 [2024-12-05 14:29:40.230880] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.658 [2024-12-05 14:29:40.230995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.658 [2024-12-05 14:29:40.231015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.658 [2024-12-05 14:29:40.235063] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.658 [2024-12-05 14:29:40.235188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.658 [2024-12-05 14:29:40.235208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.658 
[2024-12-05 14:29:40.239290] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.658 [2024-12-05 14:29:40.239380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.658 [2024-12-05 14:29:40.239400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.658 [2024-12-05 14:29:40.243482] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.658 [2024-12-05 14:29:40.243638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.658 [2024-12-05 14:29:40.243658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.658 [2024-12-05 14:29:40.247934] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.658 [2024-12-05 14:29:40.248093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.658 [2024-12-05 14:29:40.248113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.658 [2024-12-05 14:29:40.252177] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.658 [2024-12-05 14:29:40.252283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.658 [2024-12-05 14:29:40.252303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.658 [2024-12-05 14:29:40.256361] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.658 [2024-12-05 14:29:40.256482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.658 [2024-12-05 14:29:40.256502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.658 [2024-12-05 14:29:40.260557] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.658 [2024-12-05 14:29:40.260681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.658 [2024-12-05 14:29:40.260701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.658 [2024-12-05 14:29:40.264787] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.658 [2024-12-05 14:29:40.264944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.659 [2024-12-05 14:29:40.264965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:23:34.659 [2024-12-05 14:29:40.269100] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.659 [2024-12-05 14:29:40.269261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.659 [2024-12-05 14:29:40.269280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.659 [2024-12-05 14:29:40.273356] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.659 [2024-12-05 14:29:40.273461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.659 [2024-12-05 14:29:40.273481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.659 [2024-12-05 14:29:40.277603] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.659 [2024-12-05 14:29:40.277719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.659 [2024-12-05 14:29:40.277739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.659 [2024-12-05 14:29:40.281911] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.659 [2024-12-05 14:29:40.282051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.659 [2024-12-05 14:29:40.282072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.659 [2024-12-05 14:29:40.286105] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.659 [2024-12-05 14:29:40.286213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.659 [2024-12-05 14:29:40.286232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.659 [2024-12-05 14:29:40.290284] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.659 [2024-12-05 14:29:40.290392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.659 [2024-12-05 14:29:40.290413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.659 [2024-12-05 14:29:40.294478] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.659 [2024-12-05 14:29:40.294610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.659 [2024-12-05 14:29:40.294630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.659 [2024-12-05 14:29:40.298859] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.659 [2024-12-05 14:29:40.298993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.659 [2024-12-05 14:29:40.299014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.920 [2024-12-05 14:29:40.303112] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.920 [2024-12-05 14:29:40.303261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.920 [2024-12-05 14:29:40.303281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.920 [2024-12-05 14:29:40.307376] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.920 [2024-12-05 14:29:40.307519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.920 [2024-12-05 14:29:40.307540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.920 [2024-12-05 14:29:40.311627] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.920 [2024-12-05 14:29:40.311725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.920 [2024-12-05 14:29:40.311745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.920 [2024-12-05 14:29:40.316020] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.920 [2024-12-05 14:29:40.316196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.920 [2024-12-05 14:29:40.316217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.920 [2024-12-05 14:29:40.320306] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.920 [2024-12-05 14:29:40.320439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.920 [2024-12-05 14:29:40.320458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.920 [2024-12-05 14:29:40.324677] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.920 [2024-12-05 14:29:40.324850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.920 [2024-12-05 14:29:40.324870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.920 [2024-12-05 14:29:40.328878] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.920 [2024-12-05 14:29:40.329034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.920 [2024-12-05 14:29:40.329054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.920 [2024-12-05 14:29:40.333083] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.920 [2024-12-05 14:29:40.333219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.920 [2024-12-05 14:29:40.333239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.920 [2024-12-05 14:29:40.337434] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.920 [2024-12-05 14:29:40.337568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.920 [2024-12-05 14:29:40.337588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.920 [2024-12-05 14:29:40.341712] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.920 [2024-12-05 14:29:40.341821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.920 [2024-12-05 14:29:40.341841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.920 [2024-12-05 14:29:40.345940] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.920 [2024-12-05 14:29:40.346044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.920 [2024-12-05 14:29:40.346064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.920 [2024-12-05 14:29:40.350222] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.920 [2024-12-05 14:29:40.350380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.920 [2024-12-05 14:29:40.350401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.920 [2024-12-05 14:29:40.354394] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.920 [2024-12-05 14:29:40.354494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.920 [2024-12-05 14:29:40.354514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.920 [2024-12-05 14:29:40.358607] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.920 [2024-12-05 14:29:40.358734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.920 [2024-12-05 14:29:40.358755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.920 [2024-12-05 14:29:40.362822] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.920 [2024-12-05 14:29:40.362925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.920 [2024-12-05 14:29:40.362945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.920 [2024-12-05 14:29:40.366996] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.921 [2024-12-05 14:29:40.367080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.921 [2024-12-05 14:29:40.367100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.921 [2024-12-05 14:29:40.371260] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.921 [2024-12-05 14:29:40.371419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.921 [2024-12-05 14:29:40.371439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.921 [2024-12-05 14:29:40.375569] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.921 [2024-12-05 14:29:40.375662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.921 [2024-12-05 14:29:40.375682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.921 [2024-12-05 14:29:40.379865] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.921 [2024-12-05 14:29:40.380009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.921 [2024-12-05 14:29:40.380029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.921 [2024-12-05 14:29:40.384158] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.921 [2024-12-05 14:29:40.384293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.921 [2024-12-05 14:29:40.384313] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.921 [2024-12-05 14:29:40.388335] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.921 [2024-12-05 14:29:40.388411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.921 [2024-12-05 14:29:40.388432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.921 [2024-12-05 14:29:40.392757] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.921 [2024-12-05 14:29:40.392949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.921 [2024-12-05 14:29:40.392970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.921 [2024-12-05 14:29:40.397081] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.921 [2024-12-05 14:29:40.397200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.921 [2024-12-05 14:29:40.397233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.921 [2024-12-05 14:29:40.401304] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.921 [2024-12-05 14:29:40.401464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.921 [2024-12-05 14:29:40.401484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.921 [2024-12-05 14:29:40.405521] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.921 [2024-12-05 14:29:40.405665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.921 [2024-12-05 14:29:40.405685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.921 [2024-12-05 14:29:40.409669] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.921 [2024-12-05 14:29:40.409746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.921 [2024-12-05 14:29:40.409766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.921 [2024-12-05 14:29:40.413907] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.921 [2024-12-05 14:29:40.414061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.921 [2024-12-05 
14:29:40.414081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.921 [2024-12-05 14:29:40.418148] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.921 [2024-12-05 14:29:40.418310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.921 [2024-12-05 14:29:40.418330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.921 [2024-12-05 14:29:40.422364] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.921 [2024-12-05 14:29:40.422476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.921 [2024-12-05 14:29:40.422496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.921 [2024-12-05 14:29:40.426636] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.921 [2024-12-05 14:29:40.426769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.921 [2024-12-05 14:29:40.426790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.921 [2024-12-05 14:29:40.430909] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.921 [2024-12-05 14:29:40.431013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.921 [2024-12-05 14:29:40.431033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.921 [2024-12-05 14:29:40.435101] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.921 [2024-12-05 14:29:40.435220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.921 [2024-12-05 14:29:40.435241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.921 [2024-12-05 14:29:40.439273] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.921 [2024-12-05 14:29:40.439414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.921 [2024-12-05 14:29:40.439433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.921 [2024-12-05 14:29:40.444077] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.921 [2024-12-05 14:29:40.444214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:34.921 [2024-12-05 14:29:40.444234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.921 [2024-12-05 14:29:40.448379] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.921 [2024-12-05 14:29:40.448504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.921 [2024-12-05 14:29:40.448524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.921 [2024-12-05 14:29:40.453003] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.921 [2024-12-05 14:29:40.453145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.921 [2024-12-05 14:29:40.453166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.921 [2024-12-05 14:29:40.457524] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.921 [2024-12-05 14:29:40.457623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.921 [2024-12-05 14:29:40.457643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.921 [2024-12-05 14:29:40.462199] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.921 [2024-12-05 14:29:40.462365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.921 [2024-12-05 14:29:40.462385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.921 [2024-12-05 14:29:40.466826] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.921 [2024-12-05 14:29:40.466971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.921 [2024-12-05 14:29:40.467002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.921 [2024-12-05 14:29:40.471626] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.921 [2024-12-05 14:29:40.471756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.921 [2024-12-05 14:29:40.471776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.921 [2024-12-05 14:29:40.476274] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.921 [2024-12-05 14:29:40.476405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:34.921 [2024-12-05 14:29:40.476425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.921 [2024-12-05 14:29:40.480792] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.921 [2024-12-05 14:29:40.480916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.921 [2024-12-05 14:29:40.480937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.921 [2024-12-05 14:29:40.485302] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.922 [2024-12-05 14:29:40.485441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.922 [2024-12-05 14:29:40.485461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.922 [2024-12-05 14:29:40.489894] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.922 [2024-12-05 14:29:40.490039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.922 [2024-12-05 14:29:40.490060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.922 [2024-12-05 14:29:40.494245] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.922 [2024-12-05 14:29:40.494405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.922 [2024-12-05 14:29:40.494425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.922 [2024-12-05 14:29:40.498554] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.922 [2024-12-05 14:29:40.498688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.922 [2024-12-05 14:29:40.498709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.922 [2024-12-05 14:29:40.502922] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.922 [2024-12-05 14:29:40.503062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.922 [2024-12-05 14:29:40.503082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.922 [2024-12-05 14:29:40.507089] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.922 [2024-12-05 14:29:40.507180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.922 [2024-12-05 14:29:40.507200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.922 [2024-12-05 14:29:40.511462] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.922 [2024-12-05 14:29:40.511610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.922 [2024-12-05 14:29:40.511631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.922 [2024-12-05 14:29:40.515672] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.922 [2024-12-05 14:29:40.515848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.922 [2024-12-05 14:29:40.515868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.922 [2024-12-05 14:29:40.519994] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.922 [2024-12-05 14:29:40.520100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.922 [2024-12-05 14:29:40.520121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.922 [2024-12-05 14:29:40.524291] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.922 [2024-12-05 14:29:40.524416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.922 [2024-12-05 14:29:40.524436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.922 [2024-12-05 14:29:40.528498] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.922 [2024-12-05 14:29:40.528611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.922 [2024-12-05 14:29:40.528631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.922 [2024-12-05 14:29:40.532721] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.922 [2024-12-05 14:29:40.532881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.922 [2024-12-05 14:29:40.532903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.922 [2024-12-05 14:29:40.536985] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.922 [2024-12-05 14:29:40.537086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.922 [2024-12-05 14:29:40.537106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.922 [2024-12-05 14:29:40.541089] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.922 [2024-12-05 14:29:40.541198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.922 [2024-12-05 14:29:40.541218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.922 [2024-12-05 14:29:40.545234] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.922 [2024-12-05 14:29:40.545385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.922 [2024-12-05 14:29:40.545405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.922 [2024-12-05 14:29:40.549484] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.922 [2024-12-05 14:29:40.549599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.922 [2024-12-05 14:29:40.549619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.922 [2024-12-05 14:29:40.553580] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.922 [2024-12-05 14:29:40.553708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.922 [2024-12-05 14:29:40.553729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.922 [2024-12-05 14:29:40.557887] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.922 [2024-12-05 14:29:40.558020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.922 [2024-12-05 14:29:40.558041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.922 [2024-12-05 14:29:40.562302] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:34.922 [2024-12-05 14:29:40.562470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.922 [2024-12-05 14:29:40.562490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.275 [2024-12-05 14:29:40.566624] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.275 [2024-12-05 14:29:40.566817] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.275 [2024-12-05 14:29:40.566838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.275 [2024-12-05 14:29:40.570944] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.275 [2024-12-05 14:29:40.571052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.275 [2024-12-05 14:29:40.571073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.275 [2024-12-05 14:29:40.575207] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.275 [2024-12-05 14:29:40.575326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.275 [2024-12-05 14:29:40.575346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.275 [2024-12-05 14:29:40.579551] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.275 [2024-12-05 14:29:40.579704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.275 [2024-12-05 14:29:40.579724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.275 [2024-12-05 14:29:40.583853] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.275 [2024-12-05 14:29:40.583927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.275 [2024-12-05 14:29:40.583947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.275 [2024-12-05 14:29:40.588118] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.275 [2024-12-05 14:29:40.588252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.275 [2024-12-05 14:29:40.588273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.275 [2024-12-05 14:29:40.592547] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.275 [2024-12-05 14:29:40.592693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.275 [2024-12-05 14:29:40.592714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.275 [2024-12-05 14:29:40.596845] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.275 [2024-12-05 14:29:40.596922] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.275 [2024-12-05 14:29:40.596942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.275 [2024-12-05 14:29:40.601030] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.275 [2024-12-05 14:29:40.601174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.275 [2024-12-05 14:29:40.601195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.275 [2024-12-05 14:29:40.605202] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.275 [2024-12-05 14:29:40.605312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.275 [2024-12-05 14:29:40.605332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.275 [2024-12-05 14:29:40.609270] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.275 [2024-12-05 14:29:40.609349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.275 [2024-12-05 14:29:40.609369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.275 [2024-12-05 14:29:40.613699] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.275 [2024-12-05 14:29:40.613858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.275 [2024-12-05 14:29:40.613880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.275 [2024-12-05 14:29:40.618116] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.275 [2024-12-05 14:29:40.618290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.275 [2024-12-05 14:29:40.618318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.275 [2024-12-05 14:29:40.622906] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.275 [2024-12-05 14:29:40.623061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.275 [2024-12-05 14:29:40.623081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.275 [2024-12-05 14:29:40.627755] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.275 [2024-12-05 
14:29:40.627927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.275 [2024-12-05 14:29:40.627948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.275 [2024-12-05 14:29:40.632415] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.275 [2024-12-05 14:29:40.632506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.275 [2024-12-05 14:29:40.632526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.275 [2024-12-05 14:29:40.637034] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.275 [2024-12-05 14:29:40.637206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.275 [2024-12-05 14:29:40.637227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.275 [2024-12-05 14:29:40.641474] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.275 [2024-12-05 14:29:40.641631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.275 [2024-12-05 14:29:40.641651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.275 [2024-12-05 14:29:40.645844] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.275 [2024-12-05 14:29:40.645971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.275 [2024-12-05 14:29:40.645991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.276 [2024-12-05 14:29:40.650169] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.276 [2024-12-05 14:29:40.650316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.276 [2024-12-05 14:29:40.650336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.276 [2024-12-05 14:29:40.654520] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.276 [2024-12-05 14:29:40.654655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.276 [2024-12-05 14:29:40.654675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.276 [2024-12-05 14:29:40.659082] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 
00:23:35.276 [2024-12-05 14:29:40.659221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.276 [2024-12-05 14:29:40.659241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.276 [2024-12-05 14:29:40.663338] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.276 [2024-12-05 14:29:40.663481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.276 [2024-12-05 14:29:40.663501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.276 [2024-12-05 14:29:40.667656] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.276 [2024-12-05 14:29:40.667768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.276 [2024-12-05 14:29:40.667789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.276 [2024-12-05 14:29:40.671922] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.276 [2024-12-05 14:29:40.672124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.276 [2024-12-05 14:29:40.672145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.276 [2024-12-05 14:29:40.676432] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.276 [2024-12-05 14:29:40.676551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.276 [2024-12-05 14:29:40.676582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.276 [2024-12-05 14:29:40.680883] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.276 [2024-12-05 14:29:40.680993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.276 [2024-12-05 14:29:40.681014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.276 [2024-12-05 14:29:40.685227] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.276 [2024-12-05 14:29:40.685390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.276 [2024-12-05 14:29:40.685411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.276 [2024-12-05 14:29:40.689477] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with 
pdu=0x2000190fef90 00:23:35.276 [2024-12-05 14:29:40.689610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.276 [2024-12-05 14:29:40.689630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.276 [2024-12-05 14:29:40.693735] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.276 [2024-12-05 14:29:40.693896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.276 [2024-12-05 14:29:40.693917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.276 [2024-12-05 14:29:40.698197] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.276 [2024-12-05 14:29:40.698338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.276 [2024-12-05 14:29:40.698358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.276 [2024-12-05 14:29:40.702460] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.276 [2024-12-05 14:29:40.702586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.276 [2024-12-05 14:29:40.702605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.276 [2024-12-05 14:29:40.706791] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.276 [2024-12-05 14:29:40.706945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.276 [2024-12-05 14:29:40.706967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.276 [2024-12-05 14:29:40.711102] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.276 [2024-12-05 14:29:40.711220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.276 [2024-12-05 14:29:40.711240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.276 [2024-12-05 14:29:40.715539] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.276 [2024-12-05 14:29:40.715629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.276 [2024-12-05 14:29:40.715649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.276 [2024-12-05 14:29:40.719738] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.276 [2024-12-05 14:29:40.719932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.276 [2024-12-05 14:29:40.720006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.276 [2024-12-05 14:29:40.724120] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.276 [2024-12-05 14:29:40.724230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.276 [2024-12-05 14:29:40.724250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.276 [2024-12-05 14:29:40.728514] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.276 [2024-12-05 14:29:40.728656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.276 [2024-12-05 14:29:40.728677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.276 [2024-12-05 14:29:40.732901] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.276 [2024-12-05 14:29:40.733050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.276 [2024-12-05 14:29:40.733071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.276 [2024-12-05 14:29:40.737464] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.276 [2024-12-05 14:29:40.737560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.276 [2024-12-05 14:29:40.737581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.276 [2024-12-05 14:29:40.741936] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.276 [2024-12-05 14:29:40.742096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.276 [2024-12-05 14:29:40.742116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.276 [2024-12-05 14:29:40.746247] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.276 [2024-12-05 14:29:40.746372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.276 [2024-12-05 14:29:40.746393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.276 [2024-12-05 14:29:40.750593] tcp.c:2036:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.276 [2024-12-05 14:29:40.750727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.276 [2024-12-05 14:29:40.750747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.276 [2024-12-05 14:29:40.755161] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.276 [2024-12-05 14:29:40.755312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.276 [2024-12-05 14:29:40.755333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.276 [2024-12-05 14:29:40.759452] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.276 [2024-12-05 14:29:40.759548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.276 [2024-12-05 14:29:40.759568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.276 [2024-12-05 14:29:40.763779] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.276 [2024-12-05 14:29:40.763928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.276 [2024-12-05 14:29:40.763949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.276 [2024-12-05 14:29:40.768267] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.277 [2024-12-05 14:29:40.768430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.277 [2024-12-05 14:29:40.768451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.277 [2024-12-05 14:29:40.772754] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.277 [2024-12-05 14:29:40.772925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.277 [2024-12-05 14:29:40.772947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.277 [2024-12-05 14:29:40.777121] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.277 [2024-12-05 14:29:40.777285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.277 [2024-12-05 14:29:40.777306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.277 [2024-12-05 14:29:40.781496] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.277 [2024-12-05 14:29:40.781626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.277 [2024-12-05 14:29:40.781646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.277 [2024-12-05 14:29:40.785867] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.277 [2024-12-05 14:29:40.785997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.277 [2024-12-05 14:29:40.786017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.277 [2024-12-05 14:29:40.790391] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.277 [2024-12-05 14:29:40.790530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.277 [2024-12-05 14:29:40.790551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.277 [2024-12-05 14:29:40.794618] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.277 [2024-12-05 14:29:40.794772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.277 [2024-12-05 14:29:40.794792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.277 [2024-12-05 14:29:40.798966] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.277 [2024-12-05 14:29:40.799099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.277 [2024-12-05 14:29:40.799119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.277 [2024-12-05 14:29:40.803333] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.277 [2024-12-05 14:29:40.803490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.277 [2024-12-05 14:29:40.803519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.277 [2024-12-05 14:29:40.807662] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.277 [2024-12-05 14:29:40.807779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.277 [2024-12-05 14:29:40.807798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.277 
[2024-12-05 14:29:40.812040] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.277 [2024-12-05 14:29:40.812209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.277 [2024-12-05 14:29:40.812229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.277 [2024-12-05 14:29:40.816531] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.277 [2024-12-05 14:29:40.816708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.277 [2024-12-05 14:29:40.816728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.277 [2024-12-05 14:29:40.820939] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.277 [2024-12-05 14:29:40.821062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.277 [2024-12-05 14:29:40.821083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.277 [2024-12-05 14:29:40.825675] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.277 [2024-12-05 14:29:40.825838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.277 [2024-12-05 14:29:40.825864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.277 [2024-12-05 14:29:40.830059] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.277 [2024-12-05 14:29:40.830230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.277 [2024-12-05 14:29:40.830251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.277 [2024-12-05 14:29:40.834301] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.277 [2024-12-05 14:29:40.834406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.277 [2024-12-05 14:29:40.834428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.277 [2024-12-05 14:29:40.838619] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.277 [2024-12-05 14:29:40.838734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.277 [2024-12-05 14:29:40.838755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:23:35.277 [2024-12-05 14:29:40.843054] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.277 [2024-12-05 14:29:40.843150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.277 [2024-12-05 14:29:40.843172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.277 [2024-12-05 14:29:40.847264] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.277 [2024-12-05 14:29:40.847407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.277 [2024-12-05 14:29:40.847428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.277 [2024-12-05 14:29:40.851760] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.277 [2024-12-05 14:29:40.851927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.277 [2024-12-05 14:29:40.851950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.277 [2024-12-05 14:29:40.856065] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.277 [2024-12-05 14:29:40.856204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.277 [2024-12-05 14:29:40.856226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.277 [2024-12-05 14:29:40.860512] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.277 [2024-12-05 14:29:40.860652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.277 [2024-12-05 14:29:40.860673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.277 [2024-12-05 14:29:40.864745] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.277 [2024-12-05 14:29:40.864885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.277 [2024-12-05 14:29:40.864907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.277 [2024-12-05 14:29:40.869076] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.277 [2024-12-05 14:29:40.869194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.277 [2024-12-05 14:29:40.869215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.277 [2024-12-05 14:29:40.873394] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.277 [2024-12-05 14:29:40.873561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.277 [2024-12-05 14:29:40.873581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.277 [2024-12-05 14:29:40.877659] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.277 [2024-12-05 14:29:40.877764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.277 [2024-12-05 14:29:40.877785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.277 [2024-12-05 14:29:40.881955] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.277 [2024-12-05 14:29:40.882137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.277 [2024-12-05 14:29:40.882158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.277 [2024-12-05 14:29:40.886204] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.278 [2024-12-05 14:29:40.886362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.278 [2024-12-05 14:29:40.886383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.278 [2024-12-05 14:29:40.890487] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.278 [2024-12-05 14:29:40.890621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.278 [2024-12-05 14:29:40.890643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.278 [2024-12-05 14:29:40.894762] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.278 [2024-12-05 14:29:40.894919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.278 [2024-12-05 14:29:40.894941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.278 [2024-12-05 14:29:40.899040] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.278 [2024-12-05 14:29:40.899130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.278 [2024-12-05 14:29:40.899166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.278 [2024-12-05 14:29:40.903325] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.278 [2024-12-05 14:29:40.903438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.278 [2024-12-05 14:29:40.903458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.278 [2024-12-05 14:29:40.907560] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.278 [2024-12-05 14:29:40.907677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.278 [2024-12-05 14:29:40.907698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.278 [2024-12-05 14:29:40.911769] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.278 [2024-12-05 14:29:40.911911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.278 [2024-12-05 14:29:40.911932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.278 [2024-12-05 14:29:40.916195] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.278 [2024-12-05 14:29:40.916348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.278 [2024-12-05 14:29:40.916371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.554 [2024-12-05 14:29:40.920500] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.554 [2024-12-05 14:29:40.920644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.554 [2024-12-05 14:29:40.920667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.554 [2024-12-05 14:29:40.924685] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.554 [2024-12-05 14:29:40.924814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.554 [2024-12-05 14:29:40.924838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.554 [2024-12-05 14:29:40.929059] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.554 [2024-12-05 14:29:40.929246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.554 [2024-12-05 14:29:40.929286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.554 [2024-12-05 14:29:40.933349] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.554 [2024-12-05 14:29:40.933429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.554 [2024-12-05 14:29:40.933450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.554 [2024-12-05 14:29:40.937541] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.554 [2024-12-05 14:29:40.937716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.554 [2024-12-05 14:29:40.937737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.554 [2024-12-05 14:29:40.941862] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.554 [2024-12-05 14:29:40.942005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.554 [2024-12-05 14:29:40.942025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.554 [2024-12-05 14:29:40.946091] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.554 [2024-12-05 14:29:40.946186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.554 [2024-12-05 14:29:40.946206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.554 [2024-12-05 14:29:40.950335] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.554 [2024-12-05 14:29:40.950469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.554 [2024-12-05 14:29:40.950489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.554 [2024-12-05 14:29:40.954567] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.554 [2024-12-05 14:29:40.954643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.554 [2024-12-05 14:29:40.954664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.554 [2024-12-05 14:29:40.958842] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.554 [2024-12-05 14:29:40.958945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.554 [2024-12-05 14:29:40.958965] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.554 [2024-12-05 14:29:40.963039] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.554 [2024-12-05 14:29:40.963167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.554 [2024-12-05 14:29:40.963187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.554 [2024-12-05 14:29:40.967222] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.554 [2024-12-05 14:29:40.967316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.554 [2024-12-05 14:29:40.967337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.554 [2024-12-05 14:29:40.971468] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.554 [2024-12-05 14:29:40.971628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.554 [2024-12-05 14:29:40.971649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.554 [2024-12-05 14:29:40.975637] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.554 [2024-12-05 14:29:40.975770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.554 [2024-12-05 14:29:40.975791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.554 [2024-12-05 14:29:40.979748] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.554 [2024-12-05 14:29:40.979853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.554 [2024-12-05 14:29:40.979874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.554 [2024-12-05 14:29:40.984006] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.554 [2024-12-05 14:29:40.984147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.554 [2024-12-05 14:29:40.984168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.554 [2024-12-05 14:29:40.988320] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.554 [2024-12-05 14:29:40.988436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.554 [2024-12-05 14:29:40.988457] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.554 [2024-12-05 14:29:40.992461] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.554 [2024-12-05 14:29:40.992567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.555 [2024-12-05 14:29:40.992589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.555 [2024-12-05 14:29:40.996762] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.555 [2024-12-05 14:29:40.996920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.555 [2024-12-05 14:29:40.996940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.555 [2024-12-05 14:29:41.000979] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.555 [2024-12-05 14:29:41.001096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.555 [2024-12-05 14:29:41.001116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.555 [2024-12-05 14:29:41.005303] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.555 [2024-12-05 14:29:41.005465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.555 [2024-12-05 14:29:41.005485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.555 [2024-12-05 14:29:41.009529] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.555 [2024-12-05 14:29:41.009779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.555 [2024-12-05 14:29:41.009799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.555 [2024-12-05 14:29:41.013949] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.555 [2024-12-05 14:29:41.014030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.555 [2024-12-05 14:29:41.014050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.555 [2024-12-05 14:29:41.018206] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.555 [2024-12-05 14:29:41.018335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.555 [2024-12-05 
14:29:41.018355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.555 [2024-12-05 14:29:41.022436] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.555 [2024-12-05 14:29:41.022550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.555 [2024-12-05 14:29:41.022570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.555 [2024-12-05 14:29:41.026670] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.555 [2024-12-05 14:29:41.026787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.555 [2024-12-05 14:29:41.026820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.555 [2024-12-05 14:29:41.030929] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.555 [2024-12-05 14:29:41.031145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.555 [2024-12-05 14:29:41.031172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.555 [2024-12-05 14:29:41.035083] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.555 [2024-12-05 14:29:41.035179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.555 [2024-12-05 14:29:41.035199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.555 [2024-12-05 14:29:41.039278] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.555 [2024-12-05 14:29:41.039445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.555 [2024-12-05 14:29:41.039465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.555 [2024-12-05 14:29:41.043557] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.555 [2024-12-05 14:29:41.043662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.555 [2024-12-05 14:29:41.043683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.555 [2024-12-05 14:29:41.047721] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.555 [2024-12-05 14:29:41.047904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:35.555 [2024-12-05 14:29:41.047937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.555 [2024-12-05 14:29:41.052089] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.555 [2024-12-05 14:29:41.052268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.555 [2024-12-05 14:29:41.052290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.555 [2024-12-05 14:29:41.056343] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.555 [2024-12-05 14:29:41.056422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.555 [2024-12-05 14:29:41.056443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.555 [2024-12-05 14:29:41.060605] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.555 [2024-12-05 14:29:41.060711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.555 [2024-12-05 14:29:41.060732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.555 [2024-12-05 14:29:41.064864] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.555 [2024-12-05 14:29:41.065017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.555 [2024-12-05 14:29:41.065037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.555 [2024-12-05 14:29:41.069193] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.555 [2024-12-05 14:29:41.069271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.555 [2024-12-05 14:29:41.069291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.555 [2024-12-05 14:29:41.073464] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.555 [2024-12-05 14:29:41.073770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.555 [2024-12-05 14:29:41.073791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.555 [2024-12-05 14:29:41.077800] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.555 [2024-12-05 14:29:41.077993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:35.555 [2024-12-05 14:29:41.078013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.555 [2024-12-05 14:29:41.082138] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.555 [2024-12-05 14:29:41.082259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.555 [2024-12-05 14:29:41.082280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.555 [2024-12-05 14:29:41.086416] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.555 [2024-12-05 14:29:41.086561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.555 [2024-12-05 14:29:41.086581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.555 [2024-12-05 14:29:41.090663] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49280) with pdu=0x2000190fef90 00:23:35.555 [2024-12-05 14:29:41.090753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.555 [2024-12-05 14:29:41.090773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.555 00:23:35.555 Latency(us) 00:23:35.555 [2024-12-05T14:29:41.203Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.555 [2024-12-05T14:29:41.203Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:23:35.555 nvme0n1 : 2.00 7162.01 895.25 0.00 0.00 2229.49 1802.24 5838.66 00:23:35.555 [2024-12-05T14:29:41.203Z] =================================================================================================================== 00:23:35.555 [2024-12-05T14:29:41.203Z] Total : 7162.01 895.25 0.00 0.00 2229.49 1802.24 5838.66 00:23:35.555 0 00:23:35.555 14:29:41 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:35.555 14:29:41 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:35.555 14:29:41 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:35.555 | .driver_specific 00:23:35.555 | .nvme_error 00:23:35.555 | .status_code 00:23:35.555 | .command_transient_transport_error' 00:23:35.555 14:29:41 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:35.814 14:29:41 -- host/digest.sh@71 -- # (( 462 > 0 )) 00:23:35.814 14:29:41 -- host/digest.sh@73 -- # killprocess 98093 00:23:35.814 14:29:41 -- common/autotest_common.sh@936 -- # '[' -z 98093 ']' 00:23:35.814 14:29:41 -- common/autotest_common.sh@940 -- # kill -0 98093 00:23:35.814 14:29:41 -- common/autotest_common.sh@941 -- # uname 00:23:35.814 14:29:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:35.814 14:29:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98093 00:23:35.814 14:29:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:35.814 14:29:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:35.814 
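The transient-error check traced just above boils down to one RPC call plus a jq filter: bdev_get_iostat is queried over the bperf socket and the command_transient_transport_error counter is pulled out of the bdev's NVMe error statistics. A minimal standalone sketch of the same query (socket path, rpc.py location and bdev name as used in this run) is:

# Read the bdev's NVMe error counters over the bperf RPC socket and extract the
# transient transport error count that the data-digest test asserts on.
errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
# The test only passes when at least one such error was recorded (462 in this run).
(( errcount > 0 )) && echo "recorded $errcount transient transport errors"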
killing process with pid 98093 00:23:35.814 14:29:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98093' 00:23:35.814 Received shutdown signal, test time was about 2.000000 seconds 00:23:35.814 00:23:35.814 Latency(us) 00:23:35.814 [2024-12-05T14:29:41.462Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.814 [2024-12-05T14:29:41.462Z] =================================================================================================================== 00:23:35.814 [2024-12-05T14:29:41.462Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:35.814 14:29:41 -- common/autotest_common.sh@955 -- # kill 98093 00:23:35.814 14:29:41 -- common/autotest_common.sh@960 -- # wait 98093 00:23:36.071 14:29:41 -- host/digest.sh@115 -- # killprocess 97782 00:23:36.071 14:29:41 -- common/autotest_common.sh@936 -- # '[' -z 97782 ']' 00:23:36.071 14:29:41 -- common/autotest_common.sh@940 -- # kill -0 97782 00:23:36.071 14:29:41 -- common/autotest_common.sh@941 -- # uname 00:23:36.071 14:29:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:36.071 14:29:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97782 00:23:36.071 14:29:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:36.071 14:29:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:36.071 killing process with pid 97782 00:23:36.071 14:29:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97782' 00:23:36.071 14:29:41 -- common/autotest_common.sh@955 -- # kill 97782 00:23:36.071 14:29:41 -- common/autotest_common.sh@960 -- # wait 97782 00:23:36.327 00:23:36.327 real 0m18.465s 00:23:36.327 user 0m33.823s 00:23:36.327 sys 0m5.633s 00:23:36.327 14:29:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:36.327 14:29:41 -- common/autotest_common.sh@10 -- # set +x 00:23:36.327 ************************************ 00:23:36.327 END TEST nvmf_digest_error 00:23:36.327 ************************************ 00:23:36.327 14:29:41 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:23:36.327 14:29:41 -- host/digest.sh@139 -- # nvmftestfini 00:23:36.328 14:29:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:36.328 14:29:41 -- nvmf/common.sh@116 -- # sync 00:23:36.328 14:29:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:36.328 14:29:41 -- nvmf/common.sh@119 -- # set +e 00:23:36.328 14:29:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:36.328 14:29:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:36.328 rmmod nvme_tcp 00:23:36.328 rmmod nvme_fabrics 00:23:36.584 rmmod nvme_keyring 00:23:36.584 14:29:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:36.584 14:29:42 -- nvmf/common.sh@123 -- # set -e 00:23:36.584 14:29:42 -- nvmf/common.sh@124 -- # return 0 00:23:36.584 14:29:42 -- nvmf/common.sh@477 -- # '[' -n 97782 ']' 00:23:36.584 14:29:42 -- nvmf/common.sh@478 -- # killprocess 97782 00:23:36.584 14:29:42 -- common/autotest_common.sh@936 -- # '[' -z 97782 ']' 00:23:36.584 14:29:42 -- common/autotest_common.sh@940 -- # kill -0 97782 00:23:36.584 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (97782) - No such process 00:23:36.584 Process with pid 97782 is not found 00:23:36.584 14:29:42 -- common/autotest_common.sh@963 -- # echo 'Process with pid 97782 is not found' 00:23:36.585 14:29:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:36.585 14:29:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:36.585 14:29:42 -- nvmf/common.sh@484 -- 
# nvmf_tcp_fini 00:23:36.585 14:29:42 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:36.585 14:29:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:36.585 14:29:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.585 14:29:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:36.585 14:29:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.585 14:29:42 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:36.585 00:23:36.585 real 0m36.549s 00:23:36.585 user 1m5.698s 00:23:36.585 sys 0m11.550s 00:23:36.585 14:29:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:36.585 14:29:42 -- common/autotest_common.sh@10 -- # set +x 00:23:36.585 ************************************ 00:23:36.585 END TEST nvmf_digest 00:23:36.585 ************************************ 00:23:36.585 14:29:42 -- nvmf/nvmf.sh@110 -- # [[ 1 -eq 1 ]] 00:23:36.585 14:29:42 -- nvmf/nvmf.sh@110 -- # [[ tcp == \t\c\p ]] 00:23:36.585 14:29:42 -- nvmf/nvmf.sh@112 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:23:36.585 14:29:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:36.585 14:29:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:36.585 14:29:42 -- common/autotest_common.sh@10 -- # set +x 00:23:36.585 ************************************ 00:23:36.585 START TEST nvmf_mdns_discovery 00:23:36.585 ************************************ 00:23:36.585 14:29:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:23:36.585 * Looking for test storage... 00:23:36.585 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:36.585 14:29:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:36.585 14:29:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:36.585 14:29:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:36.843 14:29:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:36.843 14:29:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:36.843 14:29:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:36.843 14:29:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:36.843 14:29:42 -- scripts/common.sh@335 -- # IFS=.-: 00:23:36.843 14:29:42 -- scripts/common.sh@335 -- # read -ra ver1 00:23:36.843 14:29:42 -- scripts/common.sh@336 -- # IFS=.-: 00:23:36.843 14:29:42 -- scripts/common.sh@336 -- # read -ra ver2 00:23:36.843 14:29:42 -- scripts/common.sh@337 -- # local 'op=<' 00:23:36.843 14:29:42 -- scripts/common.sh@339 -- # ver1_l=2 00:23:36.843 14:29:42 -- scripts/common.sh@340 -- # ver2_l=1 00:23:36.843 14:29:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:36.843 14:29:42 -- scripts/common.sh@343 -- # case "$op" in 00:23:36.843 14:29:42 -- scripts/common.sh@344 -- # : 1 00:23:36.843 14:29:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:36.843 14:29:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:36.843 14:29:42 -- scripts/common.sh@364 -- # decimal 1 00:23:36.843 14:29:42 -- scripts/common.sh@352 -- # local d=1 00:23:36.843 14:29:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:36.843 14:29:42 -- scripts/common.sh@354 -- # echo 1 00:23:36.843 14:29:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:36.843 14:29:42 -- scripts/common.sh@365 -- # decimal 2 00:23:36.843 14:29:42 -- scripts/common.sh@352 -- # local d=2 00:23:36.843 14:29:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:36.843 14:29:42 -- scripts/common.sh@354 -- # echo 2 00:23:36.843 14:29:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:36.843 14:29:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:36.843 14:29:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:36.843 14:29:42 -- scripts/common.sh@367 -- # return 0 00:23:36.843 14:29:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:36.843 14:29:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:36.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.843 --rc genhtml_branch_coverage=1 00:23:36.843 --rc genhtml_function_coverage=1 00:23:36.843 --rc genhtml_legend=1 00:23:36.843 --rc geninfo_all_blocks=1 00:23:36.843 --rc geninfo_unexecuted_blocks=1 00:23:36.843 00:23:36.843 ' 00:23:36.843 14:29:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:36.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.843 --rc genhtml_branch_coverage=1 00:23:36.843 --rc genhtml_function_coverage=1 00:23:36.843 --rc genhtml_legend=1 00:23:36.843 --rc geninfo_all_blocks=1 00:23:36.843 --rc geninfo_unexecuted_blocks=1 00:23:36.843 00:23:36.843 ' 00:23:36.843 14:29:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:36.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.843 --rc genhtml_branch_coverage=1 00:23:36.843 --rc genhtml_function_coverage=1 00:23:36.843 --rc genhtml_legend=1 00:23:36.843 --rc geninfo_all_blocks=1 00:23:36.843 --rc geninfo_unexecuted_blocks=1 00:23:36.843 00:23:36.843 ' 00:23:36.843 14:29:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:36.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.843 --rc genhtml_branch_coverage=1 00:23:36.843 --rc genhtml_function_coverage=1 00:23:36.843 --rc genhtml_legend=1 00:23:36.843 --rc geninfo_all_blocks=1 00:23:36.843 --rc geninfo_unexecuted_blocks=1 00:23:36.843 00:23:36.843 ' 00:23:36.843 14:29:42 -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:36.843 14:29:42 -- nvmf/common.sh@7 -- # uname -s 00:23:36.843 14:29:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:36.843 14:29:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:36.843 14:29:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:36.843 14:29:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:36.843 14:29:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:36.843 14:29:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:36.843 14:29:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:36.843 14:29:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:36.843 14:29:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:36.843 14:29:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:36.843 14:29:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 
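The host NQN captured above is not hard-coded; common.sh asks nvme-cli for a fresh one, and the host ID recorded next is simply the UUID portion of that NQN. A small sketch of the same derivation (the parameter-expansion step is an assumption about how the split is done, but it reproduces the values seen here):

# Generate a host NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid> and keep the UUID as the host ID.
NVME_HOSTNQN=$(nvme gen-hostnqn)
NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}
echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"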
00:23:36.843 14:29:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:23:36.843 14:29:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:36.844 14:29:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:36.844 14:29:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:36.844 14:29:42 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:36.844 14:29:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:36.844 14:29:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:36.844 14:29:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:36.844 14:29:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.844 14:29:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.844 14:29:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.844 14:29:42 -- paths/export.sh@5 -- # export PATH 00:23:36.844 14:29:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.844 14:29:42 -- nvmf/common.sh@46 -- # : 0 00:23:36.844 14:29:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:36.844 14:29:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:36.844 14:29:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:36.844 14:29:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:36.844 14:29:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:36.844 14:29:42 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:23:36.844 14:29:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:36.844 14:29:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:36.844 14:29:42 -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:23:36.844 14:29:42 -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:23:36.844 14:29:42 -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:36.844 14:29:42 -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:36.844 14:29:42 -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:23:36.844 14:29:42 -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:36.844 14:29:42 -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:23:36.844 14:29:42 -- host/mdns_discovery.sh@23 -- # nvmftestinit 00:23:36.844 14:29:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:36.844 14:29:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:36.844 14:29:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:36.844 14:29:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:36.844 14:29:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:36.844 14:29:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.844 14:29:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:36.844 14:29:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.844 14:29:42 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:36.844 14:29:42 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:36.844 14:29:42 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:36.844 14:29:42 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:36.844 14:29:42 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:36.844 14:29:42 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:36.844 14:29:42 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:36.844 14:29:42 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:36.844 14:29:42 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:36.844 14:29:42 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:36.844 14:29:42 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:36.844 14:29:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:36.844 14:29:42 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:36.844 14:29:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:36.844 14:29:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:36.844 14:29:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:36.844 14:29:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:36.844 14:29:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:36.844 14:29:42 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:36.844 14:29:42 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:36.844 Cannot find device "nvmf_tgt_br" 00:23:36.844 14:29:42 -- nvmf/common.sh@154 -- # true 00:23:36.844 14:29:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:36.844 Cannot find device "nvmf_tgt_br2" 00:23:36.844 14:29:42 -- nvmf/common.sh@155 -- # true 00:23:36.844 14:29:42 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:36.844 14:29:42 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:36.844 Cannot find device "nvmf_tgt_br" 00:23:36.844 14:29:42 -- nvmf/common.sh@157 -- # true 00:23:36.844 
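The "Cannot find device" messages in this stretch are expected: before building the test topology, the helper tears down anything left over from a previous run, and each teardown command is allowed to fail. The pattern, sketched with the interface names used here, is simply:

# Idempotent cleanup: a missing interface from a previous run is not an error.
ip link set nvmf_tgt_br2 down || true
ip link delete nvmf_br type bridge || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true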
14:29:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:36.844 Cannot find device "nvmf_tgt_br2" 00:23:36.844 14:29:42 -- nvmf/common.sh@158 -- # true 00:23:36.844 14:29:42 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:36.844 14:29:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:36.844 14:29:42 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:36.844 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:36.844 14:29:42 -- nvmf/common.sh@161 -- # true 00:23:36.844 14:29:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:36.844 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:36.844 14:29:42 -- nvmf/common.sh@162 -- # true 00:23:36.844 14:29:42 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:36.844 14:29:42 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:36.844 14:29:42 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:36.844 14:29:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:36.844 14:29:42 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:37.146 14:29:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:37.146 14:29:42 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:37.146 14:29:42 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:37.146 14:29:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:37.146 14:29:42 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:37.146 14:29:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:37.146 14:29:42 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:23:37.146 14:29:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:37.146 14:29:42 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:37.146 14:29:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:37.146 14:29:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:37.146 14:29:42 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:37.146 14:29:42 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:37.146 14:29:42 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:37.146 14:29:42 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:37.146 14:29:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:37.146 14:29:42 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:37.146 14:29:42 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:37.146 14:29:42 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:37.146 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:37.146 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:23:37.146 00:23:37.146 --- 10.0.0.2 ping statistics --- 00:23:37.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.146 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:23:37.146 14:29:42 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:37.146 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:23:37.146 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:23:37.146 00:23:37.146 --- 10.0.0.3 ping statistics --- 00:23:37.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.146 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:23:37.146 14:29:42 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:37.146 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:37.146 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:23:37.146 00:23:37.146 --- 10.0.0.1 ping statistics --- 00:23:37.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.146 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:23:37.146 14:29:42 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:37.146 14:29:42 -- nvmf/common.sh@421 -- # return 0 00:23:37.146 14:29:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:37.146 14:29:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:37.146 14:29:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:37.146 14:29:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:37.146 14:29:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:37.146 14:29:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:37.146 14:29:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:37.146 14:29:42 -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:37.146 14:29:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:37.146 14:29:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:37.146 14:29:42 -- common/autotest_common.sh@10 -- # set +x 00:23:37.146 14:29:42 -- nvmf/common.sh@469 -- # nvmfpid=98398 00:23:37.146 14:29:42 -- nvmf/common.sh@470 -- # waitforlisten 98398 00:23:37.146 14:29:42 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:37.146 14:29:42 -- common/autotest_common.sh@829 -- # '[' -z 98398 ']' 00:23:37.146 14:29:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:37.146 14:29:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:37.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:37.146 14:29:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:37.146 14:29:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:37.146 14:29:42 -- common/autotest_common.sh@10 -- # set +x 00:23:37.146 [2024-12-05 14:29:42.718680] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:37.146 [2024-12-05 14:29:42.719049] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:37.405 [2024-12-05 14:29:42.867945] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.405 [2024-12-05 14:29:42.937871] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:37.405 [2024-12-05 14:29:42.938053] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:37.405 [2024-12-05 14:29:42.938070] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
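For orientation, the namespace/veth topology that the three successful pings above verify can be condensed into the following sequence (all names, addresses and firewall rules as traced by nvmf_veth_init):

# Target namespace with two veth pairs; the host-side ends are bridged together.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, 10.0.0.1
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side,    10.0.0.2
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target side,    10.0.0.3
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# Bring the links up (host side, then inside the namespace).
for link in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$link" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# Bridge the host-side peers and open the NVMe/TCP port toward the initiator interface.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT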
00:23:37.405 [2024-12-05 14:29:42.938081] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:37.405 [2024-12-05 14:29:42.938114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.405 14:29:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:37.405 14:29:42 -- common/autotest_common.sh@862 -- # return 0 00:23:37.405 14:29:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:37.405 14:29:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:37.405 14:29:42 -- common/autotest_common.sh@10 -- # set +x 00:23:37.405 14:29:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:37.405 14:29:43 -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:23:37.405 14:29:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.405 14:29:43 -- common/autotest_common.sh@10 -- # set +x 00:23:37.405 14:29:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.405 14:29:43 -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:23:37.405 14:29:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.405 14:29:43 -- common/autotest_common.sh@10 -- # set +x 00:23:37.664 14:29:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.664 14:29:43 -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:37.664 14:29:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.664 14:29:43 -- common/autotest_common.sh@10 -- # set +x 00:23:37.664 [2024-12-05 14:29:43.157825] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:37.664 14:29:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.664 14:29:43 -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:37.664 14:29:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.664 14:29:43 -- common/autotest_common.sh@10 -- # set +x 00:23:37.664 [2024-12-05 14:29:43.165923] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:37.664 14:29:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.664 14:29:43 -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:37.664 14:29:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.664 14:29:43 -- common/autotest_common.sh@10 -- # set +x 00:23:37.664 null0 00:23:37.664 14:29:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.664 14:29:43 -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:37.664 14:29:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.664 14:29:43 -- common/autotest_common.sh@10 -- # set +x 00:23:37.664 null1 00:23:37.664 14:29:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.664 14:29:43 -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:23:37.664 14:29:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.664 14:29:43 -- common/autotest_common.sh@10 -- # set +x 00:23:37.664 null2 00:23:37.664 14:29:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.664 14:29:43 -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:23:37.664 14:29:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.664 14:29:43 -- common/autotest_common.sh@10 -- # set +x 00:23:37.664 null3 00:23:37.664 14:29:43 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.664 14:29:43 -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 00:23:37.664 14:29:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.664 14:29:43 -- common/autotest_common.sh@10 -- # set +x 00:23:37.664 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:37.664 14:29:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.664 14:29:43 -- host/mdns_discovery.sh@47 -- # hostpid=98433 00:23:37.664 14:29:43 -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:37.664 14:29:43 -- host/mdns_discovery.sh@48 -- # waitforlisten 98433 /tmp/host.sock 00:23:37.664 14:29:43 -- common/autotest_common.sh@829 -- # '[' -z 98433 ']' 00:23:37.664 14:29:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:23:37.664 14:29:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:37.664 14:29:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:37.664 14:29:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:37.664 14:29:43 -- common/autotest_common.sh@10 -- # set +x 00:23:37.664 [2024-12-05 14:29:43.270554] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:37.664 [2024-12-05 14:29:43.270641] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98433 ] 00:23:37.922 [2024-12-05 14:29:43.412401] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.922 [2024-12-05 14:29:43.478301] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:37.922 [2024-12-05 14:29:43.478498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.488 14:29:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:38.488 14:29:44 -- common/autotest_common.sh@862 -- # return 0 00:23:38.488 14:29:44 -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:23:38.488 14:29:44 -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:23:38.488 14:29:44 -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:23:38.746 14:29:44 -- host/mdns_discovery.sh@57 -- # avahipid=98459 00:23:38.746 14:29:44 -- host/mdns_discovery.sh@58 -- # sleep 1 00:23:38.747 14:29:44 -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:23:38.747 14:29:44 -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:23:38.747 Process 1052 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:23:38.747 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:23:38.747 Successfully dropped root privileges. 00:23:38.747 avahi-daemon 0.8 starting up. 00:23:38.747 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:23:38.747 Successfully called chroot(). 00:23:38.747 Successfully dropped remaining capabilities. 00:23:38.747 No service file found in /etc/avahi/services. 00:23:39.680 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 
00:23:39.680 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:23:39.680 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:23:39.680 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:23:39.680 Network interface enumeration completed. 00:23:39.680 Registering new address record for fe80::6084:d4ff:fe9b:2260 on nvmf_tgt_if2.*. 00:23:39.680 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:23:39.680 Registering new address record for fe80::3c47:c4ff:feac:c7a5 on nvmf_tgt_if.*. 00:23:39.680 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:23:39.680 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 3440583356. 00:23:39.680 14:29:45 -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:39.680 14:29:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.680 14:29:45 -- common/autotest_common.sh@10 -- # set +x 00:23:39.680 14:29:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.680 14:29:45 -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:39.680 14:29:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.680 14:29:45 -- common/autotest_common.sh@10 -- # set +x 00:23:39.680 14:29:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.680 14:29:45 -- host/mdns_discovery.sh@85 -- # notify_id=0 00:23:39.680 14:29:45 -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:23:39.680 14:29:45 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:39.680 14:29:45 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:39.680 14:29:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.680 14:29:45 -- host/mdns_discovery.sh@68 -- # sort 00:23:39.680 14:29:45 -- common/autotest_common.sh@10 -- # set +x 00:23:39.680 14:29:45 -- host/mdns_discovery.sh@68 -- # xargs 00:23:39.680 14:29:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.680 14:29:45 -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:23:39.680 14:29:45 -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:23:39.680 14:29:45 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:39.680 14:29:45 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:39.680 14:29:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.680 14:29:45 -- host/mdns_discovery.sh@64 -- # sort 00:23:39.680 14:29:45 -- host/mdns_discovery.sh@64 -- # xargs 00:23:39.680 14:29:45 -- common/autotest_common.sh@10 -- # set +x 00:23:39.680 14:29:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.938 14:29:45 -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:23:39.938 14:29:45 -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:39.938 14:29:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.938 14:29:45 -- common/autotest_common.sh@10 -- # set +x 00:23:39.938 14:29:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.938 14:29:45 -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:23:39.938 14:29:45 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:39.938 14:29:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.938 14:29:45 -- common/autotest_common.sh@10 -- # set +x 00:23:39.938 14:29:45 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 
00:23:39.938 14:29:45 -- host/mdns_discovery.sh@68 -- # sort 00:23:39.938 14:29:45 -- host/mdns_discovery.sh@68 -- # xargs 00:23:39.938 14:29:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.938 14:29:45 -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:23:39.938 14:29:45 -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:23:39.938 14:29:45 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:39.938 14:29:45 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:39.938 14:29:45 -- host/mdns_discovery.sh@64 -- # sort 00:23:39.938 14:29:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.938 14:29:45 -- host/mdns_discovery.sh@64 -- # xargs 00:23:39.938 14:29:45 -- common/autotest_common.sh@10 -- # set +x 00:23:39.938 14:29:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.938 14:29:45 -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:23:39.938 14:29:45 -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:39.938 14:29:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.938 14:29:45 -- common/autotest_common.sh@10 -- # set +x 00:23:39.938 14:29:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.938 14:29:45 -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:23:39.938 14:29:45 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:39.938 14:29:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.939 14:29:45 -- common/autotest_common.sh@10 -- # set +x 00:23:39.939 14:29:45 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:39.939 14:29:45 -- host/mdns_discovery.sh@68 -- # sort 00:23:39.939 14:29:45 -- host/mdns_discovery.sh@68 -- # xargs 00:23:39.939 14:29:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.939 14:29:45 -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:23:39.939 14:29:45 -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:23:39.939 14:29:45 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:39.939 14:29:45 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:39.939 14:29:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.939 14:29:45 -- common/autotest_common.sh@10 -- # set +x 00:23:39.939 14:29:45 -- host/mdns_discovery.sh@64 -- # sort 00:23:39.939 14:29:45 -- host/mdns_discovery.sh@64 -- # xargs 00:23:39.939 14:29:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.939 [2024-12-05 14:29:45.552764] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:40.197 14:29:45 -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:23:40.197 14:29:45 -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:40.197 14:29:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.197 14:29:45 -- common/autotest_common.sh@10 -- # set +x 00:23:40.197 [2024-12-05 14:29:45.594438] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:40.197 14:29:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.197 14:29:45 -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:40.197 14:29:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.197 14:29:45 -- common/autotest_common.sh@10 -- # set +x 00:23:40.197 14:29:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.197 
14:29:45 -- host/mdns_discovery.sh@111 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:23:40.197 14:29:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.197 14:29:45 -- common/autotest_common.sh@10 -- # set +x 00:23:40.197 14:29:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.197 14:29:45 -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:23:40.197 14:29:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.197 14:29:45 -- common/autotest_common.sh@10 -- # set +x 00:23:40.197 14:29:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.197 14:29:45 -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:23:40.197 14:29:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.197 14:29:45 -- common/autotest_common.sh@10 -- # set +x 00:23:40.197 14:29:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.197 14:29:45 -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:23:40.197 14:29:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.197 14:29:45 -- common/autotest_common.sh@10 -- # set +x 00:23:40.197 [2024-12-05 14:29:45.634372] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:23:40.197 14:29:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.197 14:29:45 -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:23:40.197 14:29:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.197 14:29:45 -- common/autotest_common.sh@10 -- # set +x 00:23:40.197 [2024-12-05 14:29:45.642365] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:40.197 14:29:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.197 14:29:45 -- host/mdns_discovery.sh@124 -- # avahi_clientpid=98520 00:23:40.197 14:29:45 -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:23:40.197 14:29:45 -- host/mdns_discovery.sh@125 -- # sleep 5 00:23:41.129 [2024-12-05 14:29:46.452766] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:41.129 Established under name 'CDC' 00:23:41.387 [2024-12-05 14:29:46.852774] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:41.387 [2024-12-05 14:29:46.852794] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:23:41.387 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:41.387 cookie is 0 00:23:41.387 is_local: 1 00:23:41.387 our_own: 0 00:23:41.387 wide_area: 0 00:23:41.387 multicast: 1 00:23:41.387 cached: 1 00:23:41.387 [2024-12-05 14:29:46.952772] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:41.387 [2024-12-05 14:29:46.952794] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:23:41.387 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:41.387 cookie is 0 00:23:41.387 is_local: 1 00:23:41.387 our_own: 0 00:23:41.387 wide_area: 0 00:23:41.387 multicast: 1 00:23:41.387 
cached: 1 00:23:42.319 [2024-12-05 14:29:47.861881] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:42.319 [2024-12-05 14:29:47.861906] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:42.319 [2024-12-05 14:29:47.861922] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:42.319 [2024-12-05 14:29:47.948007] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:23:42.319 [2024-12-05 14:29:47.961553] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:42.319 [2024-12-05 14:29:47.961575] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:42.319 [2024-12-05 14:29:47.961618] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:42.576 [2024-12-05 14:29:48.010020] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:42.576 [2024-12-05 14:29:48.010063] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:42.576 [2024-12-05 14:29:48.047207] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:23:42.576 [2024-12-05 14:29:48.101704] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:42.576 [2024-12-05 14:29:48.101731] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:45.117 14:29:50 -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:23:45.118 14:29:50 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:45.118 14:29:50 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:45.118 14:29:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.118 14:29:50 -- host/mdns_discovery.sh@80 -- # xargs 00:23:45.118 14:29:50 -- host/mdns_discovery.sh@80 -- # sort 00:23:45.118 14:29:50 -- common/autotest_common.sh@10 -- # set +x 00:23:45.118 14:29:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.118 14:29:50 -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:23:45.118 14:29:50 -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:23:45.118 14:29:50 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:45.118 14:29:50 -- host/mdns_discovery.sh@76 -- # sort 00:23:45.118 14:29:50 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:45.118 14:29:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.118 14:29:50 -- common/autotest_common.sh@10 -- # set +x 00:23:45.118 14:29:50 -- host/mdns_discovery.sh@76 -- # xargs 00:23:45.118 14:29:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.118 14:29:50 -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:45.118 14:29:50 -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:23:45.118 14:29:50 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:45.118 14:29:50 -- host/mdns_discovery.sh@68 -- # jq -r 
'.[].name' 00:23:45.118 14:29:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.118 14:29:50 -- host/mdns_discovery.sh@68 -- # xargs 00:23:45.118 14:29:50 -- common/autotest_common.sh@10 -- # set +x 00:23:45.118 14:29:50 -- host/mdns_discovery.sh@68 -- # sort 00:23:45.377 14:29:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.377 14:29:50 -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:45.377 14:29:50 -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:23:45.377 14:29:50 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:45.377 14:29:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.377 14:29:50 -- common/autotest_common.sh@10 -- # set +x 00:23:45.377 14:29:50 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:45.377 14:29:50 -- host/mdns_discovery.sh@64 -- # xargs 00:23:45.377 14:29:50 -- host/mdns_discovery.sh@64 -- # sort 00:23:45.377 14:29:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.377 14:29:50 -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:23:45.377 14:29:50 -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:23:45.377 14:29:50 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:45.377 14:29:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.377 14:29:50 -- common/autotest_common.sh@10 -- # set +x 00:23:45.377 14:29:50 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:45.377 14:29:50 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:45.377 14:29:50 -- host/mdns_discovery.sh@72 -- # xargs 00:23:45.377 14:29:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.377 14:29:50 -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:23:45.377 14:29:50 -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:23:45.377 14:29:50 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:45.377 14:29:50 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:45.377 14:29:50 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:45.377 14:29:50 -- host/mdns_discovery.sh@72 -- # xargs 00:23:45.377 14:29:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.377 14:29:50 -- common/autotest_common.sh@10 -- # set +x 00:23:45.377 14:29:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.377 14:29:50 -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:23:45.377 14:29:50 -- host/mdns_discovery.sh@133 -- # get_notification_count 00:23:45.377 14:29:50 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:45.377 14:29:50 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:45.377 14:29:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.377 14:29:50 -- common/autotest_common.sh@10 -- # set +x 00:23:45.377 14:29:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.635 14:29:51 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:23:45.635 14:29:51 -- host/mdns_discovery.sh@88 -- # notify_id=2 00:23:45.635 14:29:51 -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:23:45.635 14:29:51 -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:45.635 14:29:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.635 14:29:51 -- common/autotest_common.sh@10 -- # set +x 00:23:45.635 14:29:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.635 14:29:51 -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:23:45.635 14:29:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.635 14:29:51 -- common/autotest_common.sh@10 -- # set +x 00:23:45.635 14:29:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.635 14:29:51 -- host/mdns_discovery.sh@139 -- # sleep 1 00:23:46.569 14:29:52 -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:23:46.569 14:29:52 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:46.569 14:29:52 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:46.569 14:29:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.569 14:29:52 -- common/autotest_common.sh@10 -- # set +x 00:23:46.569 14:29:52 -- host/mdns_discovery.sh@64 -- # sort 00:23:46.569 14:29:52 -- host/mdns_discovery.sh@64 -- # xargs 00:23:46.569 14:29:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.569 14:29:52 -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:46.569 14:29:52 -- host/mdns_discovery.sh@142 -- # get_notification_count 00:23:46.569 14:29:52 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:46.569 14:29:52 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:46.569 14:29:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.569 14:29:52 -- common/autotest_common.sh@10 -- # set +x 00:23:46.569 14:29:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.569 14:29:52 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:23:46.569 14:29:52 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:46.569 14:29:52 -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:23:46.569 14:29:52 -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:46.569 14:29:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.569 14:29:52 -- common/autotest_common.sh@10 -- # set +x 00:23:46.569 [2024-12-05 14:29:52.160830] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:46.569 [2024-12-05 14:29:52.160978] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:46.569 [2024-12-05 14:29:52.161004] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:46.569 [2024-12-05 14:29:52.161051] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:46.569 [2024-12-05 14:29:52.161070] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:46.569 14:29:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.569 14:29:52 -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:23:46.569 14:29:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.569 14:29:52 -- common/autotest_common.sh@10 -- # set +x 00:23:46.569 [2024-12-05 14:29:52.168785] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:46.569 [2024-12-05 14:29:52.169995] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:46.569 [2024-12-05 14:29:52.170046] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:46.569 14:29:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.569 14:29:52 -- host/mdns_discovery.sh@149 -- # sleep 1 00:23:46.828 [2024-12-05 14:29:52.301094] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:23:46.828 [2024-12-05 14:29:52.301235] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:23:46.828 [2024-12-05 14:29:52.364317] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:46.828 [2024-12-05 14:29:52.364336] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:46.828 [2024-12-05 14:29:52.364342] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:46.828 [2024-12-05 14:29:52.364356] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:46.828 [2024-12-05 14:29:52.364390] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 
done 00:23:46.828 [2024-12-05 14:29:52.364397] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:46.828 [2024-12-05 14:29:52.364402] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:46.829 [2024-12-05 14:29:52.364412] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:46.829 [2024-12-05 14:29:52.410185] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:46.829 [2024-12-05 14:29:52.410202] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:46.829 [2024-12-05 14:29:52.410236] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:46.829 [2024-12-05 14:29:52.410243] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:47.766 14:29:53 -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:23:47.766 14:29:53 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:47.766 14:29:53 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:47.766 14:29:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.766 14:29:53 -- common/autotest_common.sh@10 -- # set +x 00:23:47.766 14:29:53 -- host/mdns_discovery.sh@68 -- # sort 00:23:47.766 14:29:53 -- host/mdns_discovery.sh@68 -- # xargs 00:23:47.766 14:29:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.766 14:29:53 -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:47.766 14:29:53 -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:23:47.766 14:29:53 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:47.766 14:29:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.766 14:29:53 -- common/autotest_common.sh@10 -- # set +x 00:23:47.766 14:29:53 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:47.766 14:29:53 -- host/mdns_discovery.sh@64 -- # sort 00:23:47.766 14:29:53 -- host/mdns_discovery.sh@64 -- # xargs 00:23:47.766 14:29:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.766 14:29:53 -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:47.766 14:29:53 -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:23:47.766 14:29:53 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:47.766 14:29:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.766 14:29:53 -- common/autotest_common.sh@10 -- # set +x 00:23:47.766 14:29:53 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:47.766 14:29:53 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:47.766 14:29:53 -- host/mdns_discovery.sh@72 -- # xargs 00:23:47.766 14:29:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.766 14:29:53 -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:47.766 14:29:53 -- 
host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:23:47.766 14:29:53 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:47.766 14:29:53 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:47.766 14:29:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.766 14:29:53 -- common/autotest_common.sh@10 -- # set +x 00:23:47.766 14:29:53 -- host/mdns_discovery.sh@72 -- # xargs 00:23:47.766 14:29:53 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:47.766 14:29:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.766 14:29:53 -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:47.766 14:29:53 -- host/mdns_discovery.sh@155 -- # get_notification_count 00:23:47.766 14:29:53 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:47.766 14:29:53 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:47.766 14:29:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.766 14:29:53 -- common/autotest_common.sh@10 -- # set +x 00:23:47.766 14:29:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.028 14:29:53 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:23:48.028 14:29:53 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:48.028 14:29:53 -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:23:48.028 14:29:53 -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:48.028 14:29:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.028 14:29:53 -- common/autotest_common.sh@10 -- # set +x 00:23:48.029 [2024-12-05 14:29:53.453909] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:48.029 [2024-12-05 14:29:53.453937] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:48.029 [2024-12-05 14:29:53.453968] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:48.029 [2024-12-05 14:29:53.453981] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:48.029 14:29:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.029 14:29:53 -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:23:48.029 14:29:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.029 14:29:53 -- common/autotest_common.sh@10 -- # set +x 00:23:48.029 [2024-12-05 14:29:53.461550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.029 [2024-12-05 14:29:53.461724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.029 [2024-12-05 14:29:53.461757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.029 [2024-12-05 14:29:53.461766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.029 [2024-12-05 14:29:53.461776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.029 [2024-12-05 14:29:53.461784] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.029 [2024-12-05 14:29:53.461793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.029 [2024-12-05 14:29:53.461801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.029 [2024-12-05 14:29:53.461826] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171baa0 is same with the state(5) to be set 00:23:48.029 [2024-12-05 14:29:53.461964] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:48.029 [2024-12-05 14:29:53.462012] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:48.029 14:29:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.029 14:29:53 -- host/mdns_discovery.sh@162 -- # sleep 1 00:23:48.029 [2024-12-05 14:29:53.468542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.029 [2024-12-05 14:29:53.468571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.029 [2024-12-05 14:29:53.468599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.029 [2024-12-05 14:29:53.468608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.029 [2024-12-05 14:29:53.468617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.029 [2024-12-05 14:29:53.468625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.029 [2024-12-05 14:29:53.468634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.029 [2024-12-05 14:29:53.468641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.029 [2024-12-05 14:29:53.468649] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706760 is same with the state(5) to be set 00:23:48.029 [2024-12-05 14:29:53.471513] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171baa0 (9): Bad file descriptor 00:23:48.029 [2024-12-05 14:29:53.478510] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1706760 (9): Bad file descriptor 00:23:48.029 [2024-12-05 14:29:53.481539] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:48.029 [2024-12-05 14:29:53.481650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.029 [2024-12-05 14:29:53.481725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.029 [2024-12-05 14:29:53.481740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x171baa0 with addr=10.0.0.2, port=4420 00:23:48.029 [2024-12-05 14:29:53.481749] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171baa0 is same with the state(5) to be 
set 00:23:48.029 [2024-12-05 14:29:53.481764] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171baa0 (9): Bad file descriptor 00:23:48.029 [2024-12-05 14:29:53.481778] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:48.029 [2024-12-05 14:29:53.481786] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:48.029 [2024-12-05 14:29:53.481795] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:48.029 [2024-12-05 14:29:53.481809] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:48.029 [2024-12-05 14:29:53.488519] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:48.029 [2024-12-05 14:29:53.488609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.029 [2024-12-05 14:29:53.488680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.029 [2024-12-05 14:29:53.488694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1706760 with addr=10.0.0.3, port=4420 00:23:48.029 [2024-12-05 14:29:53.488704] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706760 is same with the state(5) to be set 00:23:48.029 [2024-12-05 14:29:53.488718] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1706760 (9): Bad file descriptor 00:23:48.029 [2024-12-05 14:29:53.488730] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:48.029 [2024-12-05 14:29:53.488737] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:48.029 [2024-12-05 14:29:53.488745] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:48.029 [2024-12-05 14:29:53.488758] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:48.029 [2024-12-05 14:29:53.491587] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:48.029 [2024-12-05 14:29:53.491671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.029 [2024-12-05 14:29:53.491710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.029 [2024-12-05 14:29:53.491723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x171baa0 with addr=10.0.0.2, port=4420 00:23:48.029 [2024-12-05 14:29:53.491731] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171baa0 is same with the state(5) to be set 00:23:48.029 [2024-12-05 14:29:53.491745] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171baa0 (9): Bad file descriptor 00:23:48.029 [2024-12-05 14:29:53.491772] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:48.029 [2024-12-05 14:29:53.491795] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:48.029 [2024-12-05 14:29:53.491803] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:23:48.029 [2024-12-05 14:29:53.491816] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:48.029 [2024-12-05 14:29:53.498581] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:48.029 [2024-12-05 14:29:53.498684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.029 [2024-12-05 14:29:53.498723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.029 [2024-12-05 14:29:53.498737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1706760 with addr=10.0.0.3, port=4420 00:23:48.029 [2024-12-05 14:29:53.498745] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706760 is same with the state(5) to be set 00:23:48.029 [2024-12-05 14:29:53.498759] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1706760 (9): Bad file descriptor 00:23:48.029 [2024-12-05 14:29:53.498772] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:48.029 [2024-12-05 14:29:53.498779] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:48.029 [2024-12-05 14:29:53.498802] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:48.030 [2024-12-05 14:29:53.498816] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:48.030 [2024-12-05 14:29:53.501630] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:48.030 [2024-12-05 14:29:53.501714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.030 [2024-12-05 14:29:53.501753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.030 [2024-12-05 14:29:53.501766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x171baa0 with addr=10.0.0.2, port=4420 00:23:48.030 [2024-12-05 14:29:53.501774] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171baa0 is same with the state(5) to be set 00:23:48.030 [2024-12-05 14:29:53.501787] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171baa0 (9): Bad file descriptor 00:23:48.030 [2024-12-05 14:29:53.501799] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:48.030 [2024-12-05 14:29:53.501807] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:48.030 [2024-12-05 14:29:53.501814] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:48.030 [2024-12-05 14:29:53.501873] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
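(Note on the "connect() failed, errno = 111" / "Bad file descriptor" burst: the 4420 listeners on 10.0.0.2 and 10.0.0.3 were just removed at mdns_discovery.sh@160/@161 while bdev_nvme still holds paths to them, so every reset attempt against port 4420 is refused until discovery moves the controllers to the 4421 listeners. errno 111 is ECONNREFUSED on Linux; a quick way to confirm the mapping on a typical system (not part of the test, and the header path can vary by distro) is:

  grep -w ECONNREFUSED /usr/include/asm-generic/errno.h
  # expected: #define ECONNREFUSED 111 /* Connection refused */
)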
00:23:48.030 [2024-12-05 14:29:53.508660] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:48.030 [2024-12-05 14:29:53.508752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.030 [2024-12-05 14:29:53.508792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.030 [2024-12-05 14:29:53.508805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1706760 with addr=10.0.0.3, port=4420 00:23:48.030 [2024-12-05 14:29:53.508814] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706760 is same with the state(5) to be set 00:23:48.030 [2024-12-05 14:29:53.508840] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1706760 (9): Bad file descriptor 00:23:48.030 [2024-12-05 14:29:53.508853] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:48.030 [2024-12-05 14:29:53.508860] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:48.030 [2024-12-05 14:29:53.508868] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:48.030 [2024-12-05 14:29:53.508880] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:48.030 [2024-12-05 14:29:53.511672] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:48.030 [2024-12-05 14:29:53.511773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.030 [2024-12-05 14:29:53.511812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.030 [2024-12-05 14:29:53.511837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x171baa0 with addr=10.0.0.2, port=4420 00:23:48.030 [2024-12-05 14:29:53.511847] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171baa0 is same with the state(5) to be set 00:23:48.030 [2024-12-05 14:29:53.511861] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171baa0 (9): Bad file descriptor 00:23:48.030 [2024-12-05 14:29:53.511873] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:48.030 [2024-12-05 14:29:53.511880] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:48.030 [2024-12-05 14:29:53.511887] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:48.030 [2024-12-05 14:29:53.511900] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:48.030 [2024-12-05 14:29:53.518708] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:48.030 [2024-12-05 14:29:53.518796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.030 [2024-12-05 14:29:53.518847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.030 [2024-12-05 14:29:53.518862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1706760 with addr=10.0.0.3, port=4420 00:23:48.030 [2024-12-05 14:29:53.518871] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706760 is same with the state(5) to be set 00:23:48.030 [2024-12-05 14:29:53.518885] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1706760 (9): Bad file descriptor 00:23:48.030 [2024-12-05 14:29:53.518897] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:48.030 [2024-12-05 14:29:53.518904] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:48.030 [2024-12-05 14:29:53.518911] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:48.030 [2024-12-05 14:29:53.518923] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:48.030 [2024-12-05 14:29:53.521730] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:48.030 [2024-12-05 14:29:53.521840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.030 [2024-12-05 14:29:53.521882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.030 [2024-12-05 14:29:53.521895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x171baa0 with addr=10.0.0.2, port=4420 00:23:48.030 [2024-12-05 14:29:53.521904] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171baa0 is same with the state(5) to be set 00:23:48.030 [2024-12-05 14:29:53.521918] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171baa0 (9): Bad file descriptor 00:23:48.030 [2024-12-05 14:29:53.521930] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:48.030 [2024-12-05 14:29:53.521937] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:48.030 [2024-12-05 14:29:53.521944] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:48.030 [2024-12-05 14:29:53.521956] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:48.030 [2024-12-05 14:29:53.528755] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:48.030 [2024-12-05 14:29:53.528867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.030 [2024-12-05 14:29:53.528907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.030 [2024-12-05 14:29:53.528921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1706760 with addr=10.0.0.3, port=4420 00:23:48.030 [2024-12-05 14:29:53.528939] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706760 is same with the state(5) to be set 00:23:48.030 [2024-12-05 14:29:53.528971] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1706760 (9): Bad file descriptor 00:23:48.030 [2024-12-05 14:29:53.529000] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:48.030 [2024-12-05 14:29:53.529007] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:48.030 [2024-12-05 14:29:53.529015] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:48.030 [2024-12-05 14:29:53.529045] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:48.030 [2024-12-05 14:29:53.531803] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:48.030 [2024-12-05 14:29:53.531900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.030 [2024-12-05 14:29:53.531939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.030 [2024-12-05 14:29:53.531980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x171baa0 with addr=10.0.0.2, port=4420 00:23:48.030 [2024-12-05 14:29:53.532005] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171baa0 is same with the state(5) to be set 00:23:48.030 [2024-12-05 14:29:53.532020] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171baa0 (9): Bad file descriptor 00:23:48.030 [2024-12-05 14:29:53.532047] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:48.030 [2024-12-05 14:29:53.532056] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:48.031 [2024-12-05 14:29:53.532064] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:48.031 [2024-12-05 14:29:53.532077] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:48.031 [2024-12-05 14:29:53.538799] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:48.031 [2024-12-05 14:29:53.538896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.031 [2024-12-05 14:29:53.538934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.031 [2024-12-05 14:29:53.538948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1706760 with addr=10.0.0.3, port=4420 00:23:48.031 [2024-12-05 14:29:53.538956] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706760 is same with the state(5) to be set 00:23:48.031 [2024-12-05 14:29:53.538969] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1706760 (9): Bad file descriptor 00:23:48.031 [2024-12-05 14:29:53.538981] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:48.031 [2024-12-05 14:29:53.539004] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:48.031 [2024-12-05 14:29:53.539011] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:48.031 [2024-12-05 14:29:53.539039] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:48.031 [2024-12-05 14:29:53.541871] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:48.031 [2024-12-05 14:29:53.541958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.031 [2024-12-05 14:29:53.541996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.031 [2024-12-05 14:29:53.542009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x171baa0 with addr=10.0.0.2, port=4420 00:23:48.031 [2024-12-05 14:29:53.542017] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171baa0 is same with the state(5) to be set 00:23:48.031 [2024-12-05 14:29:53.542063] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171baa0 (9): Bad file descriptor 00:23:48.031 [2024-12-05 14:29:53.542090] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:48.031 [2024-12-05 14:29:53.542099] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:48.031 [2024-12-05 14:29:53.542107] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:48.031 [2024-12-05 14:29:53.542119] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:48.031 [2024-12-05 14:29:53.548870] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:48.031 [2024-12-05 14:29:53.548961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.031 [2024-12-05 14:29:53.549001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.031 [2024-12-05 14:29:53.549014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1706760 with addr=10.0.0.3, port=4420 00:23:48.031 [2024-12-05 14:29:53.549022] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706760 is same with the state(5) to be set 00:23:48.031 [2024-12-05 14:29:53.549036] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1706760 (9): Bad file descriptor 00:23:48.031 [2024-12-05 14:29:53.549048] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:48.031 [2024-12-05 14:29:53.549055] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:48.031 [2024-12-05 14:29:53.549078] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:48.031 [2024-12-05 14:29:53.549106] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:48.031 [2024-12-05 14:29:53.551934] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:48.031 [2024-12-05 14:29:53.552049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.031 [2024-12-05 14:29:53.552091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.031 [2024-12-05 14:29:53.552104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x171baa0 with addr=10.0.0.2, port=4420 00:23:48.031 [2024-12-05 14:29:53.552113] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171baa0 is same with the state(5) to be set 00:23:48.031 [2024-12-05 14:29:53.552143] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171baa0 (9): Bad file descriptor 00:23:48.031 [2024-12-05 14:29:53.552202] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:48.031 [2024-12-05 14:29:53.552215] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:48.031 [2024-12-05 14:29:53.552223] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:48.031 [2024-12-05 14:29:53.552236] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:48.031 [2024-12-05 14:29:53.558920] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:48.031 [2024-12-05 14:29:53.558996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.031 [2024-12-05 14:29:53.559035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.031 [2024-12-05 14:29:53.559048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1706760 with addr=10.0.0.3, port=4420 00:23:48.031 [2024-12-05 14:29:53.559056] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706760 is same with the state(5) to be set 00:23:48.031 [2024-12-05 14:29:53.559101] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1706760 (9): Bad file descriptor 00:23:48.031 [2024-12-05 14:29:53.559114] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:48.031 [2024-12-05 14:29:53.559121] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:48.031 [2024-12-05 14:29:53.559129] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:48.031 [2024-12-05 14:29:53.559141] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:48.031 [2024-12-05 14:29:53.562020] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:48.031 [2024-12-05 14:29:53.562107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.031 [2024-12-05 14:29:53.562146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.031 [2024-12-05 14:29:53.562159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x171baa0 with addr=10.0.0.2, port=4420 00:23:48.031 [2024-12-05 14:29:53.562167] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171baa0 is same with the state(5) to be set 00:23:48.031 [2024-12-05 14:29:53.562180] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171baa0 (9): Bad file descriptor 00:23:48.031 [2024-12-05 14:29:53.562222] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:48.031 [2024-12-05 14:29:53.562247] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:48.031 [2024-12-05 14:29:53.562255] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:48.031 [2024-12-05 14:29:53.562267] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:48.031 [2024-12-05 14:29:53.568968] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:48.031 [2024-12-05 14:29:53.569054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.031 [2024-12-05 14:29:53.569093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.031 [2024-12-05 14:29:53.569106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1706760 with addr=10.0.0.3, port=4420 00:23:48.031 [2024-12-05 14:29:53.569114] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706760 is same with the state(5) to be set 00:23:48.031 [2024-12-05 14:29:53.569139] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1706760 (9): Bad file descriptor 00:23:48.031 [2024-12-05 14:29:53.569150] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:48.031 [2024-12-05 14:29:53.569157] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:48.032 [2024-12-05 14:29:53.569164] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:48.032 [2024-12-05 14:29:53.569176] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:48.032 [2024-12-05 14:29:53.572067] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:48.032 [2024-12-05 14:29:53.572150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.032 [2024-12-05 14:29:53.572190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.032 [2024-12-05 14:29:53.572203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x171baa0 with addr=10.0.0.2, port=4420 00:23:48.032 [2024-12-05 14:29:53.572212] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171baa0 is same with the state(5) to be set 00:23:48.032 [2024-12-05 14:29:53.572225] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171baa0 (9): Bad file descriptor 00:23:48.032 [2024-12-05 14:29:53.572266] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:48.032 [2024-12-05 14:29:53.572275] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:48.032 [2024-12-05 14:29:53.572282] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:48.032 [2024-12-05 14:29:53.572310] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:48.032 [2024-12-05 14:29:53.579012] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:48.032 [2024-12-05 14:29:53.579100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.032 [2024-12-05 14:29:53.579138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.032 [2024-12-05 14:29:53.579151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1706760 with addr=10.0.0.3, port=4420 00:23:48.032 [2024-12-05 14:29:53.579160] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706760 is same with the state(5) to be set 00:23:48.032 [2024-12-05 14:29:53.579173] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1706760 (9): Bad file descriptor 00:23:48.032 [2024-12-05 14:29:53.579184] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:48.032 [2024-12-05 14:29:53.579207] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:48.032 [2024-12-05 14:29:53.579230] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:48.032 [2024-12-05 14:29:53.579243] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:48.032 [2024-12-05 14:29:53.582124] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:48.032 [2024-12-05 14:29:53.582206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.032 [2024-12-05 14:29:53.582245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.032 [2024-12-05 14:29:53.582258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x171baa0 with addr=10.0.0.2, port=4420 00:23:48.032 [2024-12-05 14:29:53.582266] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171baa0 is same with the state(5) to be set 00:23:48.032 [2024-12-05 14:29:53.582295] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171baa0 (9): Bad file descriptor 00:23:48.032 [2024-12-05 14:29:53.582338] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:48.032 [2024-12-05 14:29:53.582347] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:48.032 [2024-12-05 14:29:53.582355] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:48.032 [2024-12-05 14:29:53.582367] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:48.032 [2024-12-05 14:29:53.589076] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:48.032 [2024-12-05 14:29:53.589162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.032 [2024-12-05 14:29:53.589200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.032 [2024-12-05 14:29:53.589213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1706760 with addr=10.0.0.3, port=4420 00:23:48.032 [2024-12-05 14:29:53.589221] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706760 is same with the state(5) to be set 00:23:48.032 [2024-12-05 14:29:53.589234] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1706760 (9): Bad file descriptor 00:23:48.032 [2024-12-05 14:29:53.589246] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:48.032 [2024-12-05 14:29:53.589253] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:48.032 [2024-12-05 14:29:53.589261] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:48.032 [2024-12-05 14:29:53.589272] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:48.032 [2024-12-05 14:29:53.592182] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:48.032 [2024-12-05 14:29:53.592255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.032 [2024-12-05 14:29:53.592310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.032 [2024-12-05 14:29:53.592323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x171baa0 with addr=10.0.0.2, port=4420 00:23:48.032 [2024-12-05 14:29:53.592331] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171baa0 is same with the state(5) to be set 00:23:48.032 [2024-12-05 14:29:53.592360] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171baa0 (9): Bad file descriptor 00:23:48.032 [2024-12-05 14:29:53.592411] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:48.032 [2024-12-05 14:29:53.592428] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:48.032 [2024-12-05 14:29:53.592444] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:48.032 [2024-12-05 14:29:53.592473] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:48.032 [2024-12-05 14:29:53.592484] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:48.032 [2024-12-05 14:29:53.592492] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:48.032 [2024-12-05 14:29:53.592507] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
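The burst of identical reset failures above is expected at this point in the test: the subsystems have just moved their listeners from port 4420 to 4421 (see the "not found" / "found again" discovery lines that follow), so every reconnect attempt against the removed 4420 listener is refused until the discovery poller re-attaches on 4421. errno 111 is ECONNREFUSED; a one-liner to confirm the mapping, purely illustrative and not part of the test:

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # ECONNREFUSED - Connection refused

Once a fresh discovery log page is fetched, the stale 4420 path is dropped and the resets stop a few lines later.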
00:23:48.032 [2024-12-05 14:29:53.593403] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:23:48.032 [2024-12-05 14:29:53.593435] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:48.032 [2024-12-05 14:29:53.593450] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:48.291 [2024-12-05 14:29:53.678438] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:48.291 [2024-12-05 14:29:53.679442] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:48.860 14:29:54 -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:23:48.860 14:29:54 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:48.860 14:29:54 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:48.860 14:29:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.860 14:29:54 -- host/mdns_discovery.sh@68 -- # sort 00:23:48.860 14:29:54 -- common/autotest_common.sh@10 -- # set +x 00:23:48.860 14:29:54 -- host/mdns_discovery.sh@68 -- # xargs 00:23:48.860 14:29:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.120 14:29:54 -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:49.120 14:29:54 -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:23:49.120 14:29:54 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:49.120 14:29:54 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:49.120 14:29:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.120 14:29:54 -- common/autotest_common.sh@10 -- # set +x 00:23:49.120 14:29:54 -- host/mdns_discovery.sh@64 -- # sort 00:23:49.120 14:29:54 -- host/mdns_discovery.sh@64 -- # xargs 00:23:49.120 14:29:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.120 14:29:54 -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:49.120 14:29:54 -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:23:49.120 14:29:54 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:49.120 14:29:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.120 14:29:54 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:49.120 14:29:54 -- common/autotest_common.sh@10 -- # set +x 00:23:49.120 14:29:54 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:49.120 14:29:54 -- host/mdns_discovery.sh@72 -- # xargs 00:23:49.120 14:29:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.120 14:29:54 -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:23:49.120 14:29:54 -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:23:49.120 14:29:54 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:49.120 14:29:54 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:49.120 14:29:54 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:49.120 14:29:54 -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:23:49.120 14:29:54 -- host/mdns_discovery.sh@72 -- # xargs 00:23:49.120 14:29:54 -- common/autotest_common.sh@10 -- # set +x 00:23:49.120 14:29:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.120 14:29:54 -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:23:49.120 14:29:54 -- host/mdns_discovery.sh@168 -- # get_notification_count 00:23:49.120 14:29:54 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:49.120 14:29:54 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:49.120 14:29:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.120 14:29:54 -- common/autotest_common.sh@10 -- # set +x 00:23:49.120 14:29:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.120 14:29:54 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:23:49.120 14:29:54 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:49.120 14:29:54 -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:23:49.120 14:29:54 -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:49.120 14:29:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.120 14:29:54 -- common/autotest_common.sh@10 -- # set +x 00:23:49.120 14:29:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.120 14:29:54 -- host/mdns_discovery.sh@172 -- # sleep 1 00:23:49.120 [2024-12-05 14:29:54.752784] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:50.509 14:29:55 -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:23:50.509 14:29:55 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:50.509 14:29:55 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:50.509 14:29:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.509 14:29:55 -- common/autotest_common.sh@10 -- # set +x 00:23:50.509 14:29:55 -- host/mdns_discovery.sh@80 -- # sort 00:23:50.509 14:29:55 -- host/mdns_discovery.sh@80 -- # xargs 00:23:50.509 14:29:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.509 14:29:55 -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:23:50.509 14:29:55 -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:23:50.509 14:29:55 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:50.509 14:29:55 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:50.509 14:29:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.509 14:29:55 -- host/mdns_discovery.sh@68 -- # sort 00:23:50.509 14:29:55 -- common/autotest_common.sh@10 -- # set +x 00:23:50.509 14:29:55 -- host/mdns_discovery.sh@68 -- # xargs 00:23:50.509 14:29:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.509 14:29:55 -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:23:50.509 14:29:55 -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:23:50.509 14:29:55 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:50.509 14:29:55 -- host/mdns_discovery.sh@64 -- # sort 00:23:50.509 14:29:55 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:50.509 14:29:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.509 14:29:55 -- common/autotest_common.sh@10 -- # set +x 00:23:50.509 14:29:55 -- host/mdns_discovery.sh@64 -- # xargs 00:23:50.509 14:29:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.509 14:29:55 -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:23:50.509 
14:29:55 -- host/mdns_discovery.sh@177 -- # get_notification_count 00:23:50.509 14:29:55 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:50.509 14:29:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.509 14:29:55 -- common/autotest_common.sh@10 -- # set +x 00:23:50.509 14:29:55 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:50.509 14:29:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.509 14:29:55 -- host/mdns_discovery.sh@87 -- # notification_count=4 00:23:50.509 14:29:55 -- host/mdns_discovery.sh@88 -- # notify_id=8 00:23:50.509 14:29:55 -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:23:50.509 14:29:55 -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:50.509 14:29:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.509 14:29:55 -- common/autotest_common.sh@10 -- # set +x 00:23:50.509 14:29:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.509 14:29:55 -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:50.509 14:29:55 -- common/autotest_common.sh@650 -- # local es=0 00:23:50.509 14:29:55 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:50.509 14:29:55 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:50.509 14:29:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:50.509 14:29:55 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:50.509 14:29:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:50.509 14:29:55 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:50.509 14:29:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.509 14:29:55 -- common/autotest_common.sh@10 -- # set +x 00:23:50.509 [2024-12-05 14:29:55.987550] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:23:50.509 2024/12/05 14:29:55 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:50.509 request: 00:23:50.509 { 00:23:50.509 "method": "bdev_nvme_start_mdns_discovery", 00:23:50.509 "params": { 00:23:50.509 "name": "mdns", 00:23:50.509 "svcname": "_nvme-disc._http", 00:23:50.509 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:50.509 } 00:23:50.509 } 00:23:50.509 Got JSON-RPC error response 00:23:50.509 GoRPCClient: error on JSON-RPC call 00:23:50.509 14:29:55 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:50.509 14:29:55 -- common/autotest_common.sh@653 -- # es=1 00:23:50.509 14:29:55 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:50.509 14:29:55 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:50.509 14:29:55 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:50.509 14:29:55 -- host/mdns_discovery.sh@183 -- # sleep 5 00:23:50.767 [2024-12-05 14:29:56.376095] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:51.026 [2024-12-05 14:29:56.476091] bdev_mdns_client.c: 
395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:51.026 [2024-12-05 14:29:56.576098] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:51.026 [2024-12-05 14:29:56.576117] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:23:51.026 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:51.026 cookie is 0 00:23:51.026 is_local: 1 00:23:51.026 our_own: 0 00:23:51.026 wide_area: 0 00:23:51.026 multicast: 1 00:23:51.026 cached: 1 00:23:51.284 [2024-12-05 14:29:56.676099] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:51.284 [2024-12-05 14:29:56.676122] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:23:51.284 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:51.284 cookie is 0 00:23:51.284 is_local: 1 00:23:51.284 our_own: 0 00:23:51.284 wide_area: 0 00:23:51.284 multicast: 1 00:23:51.284 cached: 1 00:23:52.220 [2024-12-05 14:29:57.582936] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:52.220 [2024-12-05 14:29:57.582954] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:52.220 [2024-12-05 14:29:57.582969] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:52.220 [2024-12-05 14:29:57.669070] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:23:52.220 [2024-12-05 14:29:57.682903] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:52.220 [2024-12-05 14:29:57.682924] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:52.220 [2024-12-05 14:29:57.682956] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:52.220 [2024-12-05 14:29:57.732772] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:52.220 [2024-12-05 14:29:57.732800] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:52.220 [2024-12-05 14:29:57.768905] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:23:52.220 [2024-12-05 14:29:57.827549] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:52.220 [2024-12-05 14:29:57.827574] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:55.506 14:30:00 -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:23:55.506 14:30:01 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:55.506 14:30:01 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:55.506 14:30:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.506 14:30:01 -- common/autotest_common.sh@10 -- # set +x 00:23:55.506 14:30:01 -- host/mdns_discovery.sh@80 -- # sort 00:23:55.506 14:30:01 -- host/mdns_discovery.sh@80 -- # xargs 00:23:55.506 14:30:01 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.506 14:30:01 -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:23:55.506 14:30:01 -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:23:55.506 14:30:01 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:55.506 14:30:01 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:55.506 14:30:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.506 14:30:01 -- host/mdns_discovery.sh@76 -- # sort 00:23:55.506 14:30:01 -- host/mdns_discovery.sh@76 -- # xargs 00:23:55.506 14:30:01 -- common/autotest_common.sh@10 -- # set +x 00:23:55.506 14:30:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.506 14:30:01 -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:55.506 14:30:01 -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:23:55.506 14:30:01 -- host/mdns_discovery.sh@64 -- # sort 00:23:55.506 14:30:01 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:55.506 14:30:01 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:55.506 14:30:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.506 14:30:01 -- common/autotest_common.sh@10 -- # set +x 00:23:55.506 14:30:01 -- host/mdns_discovery.sh@64 -- # xargs 00:23:55.765 14:30:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.765 14:30:01 -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:55.765 14:30:01 -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:55.765 14:30:01 -- common/autotest_common.sh@650 -- # local es=0 00:23:55.765 14:30:01 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:55.765 14:30:01 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:55.765 14:30:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:55.765 14:30:01 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:55.765 14:30:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:55.765 14:30:01 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:55.765 14:30:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.765 14:30:01 -- common/autotest_common.sh@10 -- # set +x 00:23:55.765 [2024-12-05 14:30:01.175023] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:23:55.765 2024/12/05 14:30:01 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:55.765 request: 00:23:55.765 { 00:23:55.765 "method": "bdev_nvme_start_mdns_discovery", 00:23:55.765 "params": { 00:23:55.765 "name": "cdc", 00:23:55.765 "svcname": "_nvme-disc._tcp", 00:23:55.765 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:55.765 } 00:23:55.765 } 00:23:55.765 Got JSON-RPC error response 00:23:55.765 GoRPCClient: error on JSON-RPC call 00:23:55.765 
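Both rpc_cmd failures above are deliberate negative tests, which is why they are wrapped in NOT: only one mDNS discovery poller may run per name and per service, so asking for a second one with the existing name "mdns" (earlier) or for the already-polled service _nvme-disc._tcp under a new name "cdc" (here) returns JSON-RPC error -17, File exists. A standalone reproduction, using the same socket and host NQN as this run, would look roughly like:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
        bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
    # -> error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists

The NOT helper simply inverts the exit status, so the failed RPC counts as a pass.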
14:30:01 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:55.765 14:30:01 -- common/autotest_common.sh@653 -- # es=1 00:23:55.765 14:30:01 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:55.765 14:30:01 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:55.765 14:30:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:55.765 14:30:01 -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:23:55.765 14:30:01 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:55.765 14:30:01 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:55.765 14:30:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.765 14:30:01 -- common/autotest_common.sh@10 -- # set +x 00:23:55.765 14:30:01 -- host/mdns_discovery.sh@76 -- # sort 00:23:55.765 14:30:01 -- host/mdns_discovery.sh@76 -- # xargs 00:23:55.765 14:30:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.765 14:30:01 -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:55.765 14:30:01 -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:23:55.765 14:30:01 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:55.765 14:30:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.765 14:30:01 -- common/autotest_common.sh@10 -- # set +x 00:23:55.765 14:30:01 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:55.765 14:30:01 -- host/mdns_discovery.sh@64 -- # sort 00:23:55.765 14:30:01 -- host/mdns_discovery.sh@64 -- # xargs 00:23:55.765 14:30:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.765 14:30:01 -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:55.765 14:30:01 -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:55.765 14:30:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.765 14:30:01 -- common/autotest_common.sh@10 -- # set +x 00:23:55.765 14:30:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.765 14:30:01 -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:23:55.765 14:30:01 -- host/mdns_discovery.sh@197 -- # kill 98433 00:23:55.765 14:30:01 -- host/mdns_discovery.sh@200 -- # wait 98433 00:23:55.765 [2024-12-05 14:30:01.404902] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:56.023 14:30:01 -- host/mdns_discovery.sh@201 -- # kill 98520 00:23:56.023 Got SIGTERM, quitting. 00:23:56.023 14:30:01 -- host/mdns_discovery.sh@202 -- # kill 98459 00:23:56.023 14:30:01 -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:23:56.023 14:30:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:56.023 14:30:01 -- nvmf/common.sh@116 -- # sync 00:23:56.023 Got SIGTERM, quitting. 00:23:56.023 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:23:56.023 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:23:56.023 avahi-daemon 0.8 exiting. 
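The shutdown above follows the suite's usual teardown pattern: clear the failure trap on the success path, then kill and reap the background pids so their final output (here avahi-daemon's "Got SIGTERM, quitting." and the multicast-group departures) is flushed into the log before nvmftestfini unloads the kernel modules. Schematically, with placeholder pid variables rather than the literal 98433/98520/98459 of this run:

    trap - SIGINT SIGTERM EXIT            # disarm the cleanup-on-error trap
    kill "$app_pid" && wait "$app_pid"    # reap so shutdown messages land in the log
    kill "$helper_pid"                    # remaining background helpers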
00:23:56.023 14:30:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:56.023 14:30:01 -- nvmf/common.sh@119 -- # set +e 00:23:56.023 14:30:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:56.023 14:30:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:56.023 rmmod nvme_tcp 00:23:56.023 rmmod nvme_fabrics 00:23:56.023 rmmod nvme_keyring 00:23:56.023 14:30:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:56.023 14:30:01 -- nvmf/common.sh@123 -- # set -e 00:23:56.023 14:30:01 -- nvmf/common.sh@124 -- # return 0 00:23:56.023 14:30:01 -- nvmf/common.sh@477 -- # '[' -n 98398 ']' 00:23:56.023 14:30:01 -- nvmf/common.sh@478 -- # killprocess 98398 00:23:56.023 14:30:01 -- common/autotest_common.sh@936 -- # '[' -z 98398 ']' 00:23:56.023 14:30:01 -- common/autotest_common.sh@940 -- # kill -0 98398 00:23:56.023 14:30:01 -- common/autotest_common.sh@941 -- # uname 00:23:56.023 14:30:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:56.023 14:30:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98398 00:23:56.023 14:30:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:56.023 killing process with pid 98398 00:23:56.023 14:30:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:56.023 14:30:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98398' 00:23:56.023 14:30:01 -- common/autotest_common.sh@955 -- # kill 98398 00:23:56.281 14:30:01 -- common/autotest_common.sh@960 -- # wait 98398 00:23:56.539 14:30:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:56.539 14:30:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:56.539 14:30:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:56.539 14:30:01 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:56.539 14:30:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:56.539 14:30:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.539 14:30:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:56.539 14:30:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.539 14:30:01 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:56.539 00:23:56.539 real 0m19.892s 00:23:56.539 user 0m39.129s 00:23:56.539 sys 0m1.916s 00:23:56.539 14:30:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:56.539 ************************************ 00:23:56.539 END TEST nvmf_mdns_discovery 00:23:56.539 ************************************ 00:23:56.539 14:30:01 -- common/autotest_common.sh@10 -- # set +x 00:23:56.539 14:30:02 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:23:56.539 14:30:02 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:56.539 14:30:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:56.539 14:30:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:56.539 14:30:02 -- common/autotest_common.sh@10 -- # set +x 00:23:56.539 ************************************ 00:23:56.539 START TEST nvmf_multipath 00:23:56.539 ************************************ 00:23:56.539 14:30:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:56.539 * Looking for test storage... 
00:23:56.539 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:56.539 14:30:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:56.539 14:30:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:56.539 14:30:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:56.799 14:30:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:56.799 14:30:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:56.799 14:30:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:56.799 14:30:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:56.799 14:30:02 -- scripts/common.sh@335 -- # IFS=.-: 00:23:56.799 14:30:02 -- scripts/common.sh@335 -- # read -ra ver1 00:23:56.799 14:30:02 -- scripts/common.sh@336 -- # IFS=.-: 00:23:56.799 14:30:02 -- scripts/common.sh@336 -- # read -ra ver2 00:23:56.799 14:30:02 -- scripts/common.sh@337 -- # local 'op=<' 00:23:56.799 14:30:02 -- scripts/common.sh@339 -- # ver1_l=2 00:23:56.799 14:30:02 -- scripts/common.sh@340 -- # ver2_l=1 00:23:56.799 14:30:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:56.799 14:30:02 -- scripts/common.sh@343 -- # case "$op" in 00:23:56.799 14:30:02 -- scripts/common.sh@344 -- # : 1 00:23:56.799 14:30:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:56.799 14:30:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:56.799 14:30:02 -- scripts/common.sh@364 -- # decimal 1 00:23:56.799 14:30:02 -- scripts/common.sh@352 -- # local d=1 00:23:56.799 14:30:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:56.799 14:30:02 -- scripts/common.sh@354 -- # echo 1 00:23:56.799 14:30:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:56.799 14:30:02 -- scripts/common.sh@365 -- # decimal 2 00:23:56.799 14:30:02 -- scripts/common.sh@352 -- # local d=2 00:23:56.799 14:30:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:56.799 14:30:02 -- scripts/common.sh@354 -- # echo 2 00:23:56.799 14:30:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:56.799 14:30:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:56.799 14:30:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:56.799 14:30:02 -- scripts/common.sh@367 -- # return 0 00:23:56.799 14:30:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:56.799 14:30:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:56.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.799 --rc genhtml_branch_coverage=1 00:23:56.799 --rc genhtml_function_coverage=1 00:23:56.799 --rc genhtml_legend=1 00:23:56.799 --rc geninfo_all_blocks=1 00:23:56.799 --rc geninfo_unexecuted_blocks=1 00:23:56.799 00:23:56.799 ' 00:23:56.799 14:30:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:56.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.799 --rc genhtml_branch_coverage=1 00:23:56.799 --rc genhtml_function_coverage=1 00:23:56.799 --rc genhtml_legend=1 00:23:56.799 --rc geninfo_all_blocks=1 00:23:56.799 --rc geninfo_unexecuted_blocks=1 00:23:56.799 00:23:56.799 ' 00:23:56.799 14:30:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:56.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.799 --rc genhtml_branch_coverage=1 00:23:56.799 --rc genhtml_function_coverage=1 00:23:56.799 --rc genhtml_legend=1 00:23:56.799 --rc geninfo_all_blocks=1 00:23:56.799 --rc geninfo_unexecuted_blocks=1 00:23:56.799 00:23:56.799 ' 00:23:56.799 
14:30:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:56.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.799 --rc genhtml_branch_coverage=1 00:23:56.799 --rc genhtml_function_coverage=1 00:23:56.799 --rc genhtml_legend=1 00:23:56.799 --rc geninfo_all_blocks=1 00:23:56.799 --rc geninfo_unexecuted_blocks=1 00:23:56.799 00:23:56.799 ' 00:23:56.799 14:30:02 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:56.799 14:30:02 -- nvmf/common.sh@7 -- # uname -s 00:23:56.799 14:30:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:56.799 14:30:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:56.799 14:30:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:56.799 14:30:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:56.799 14:30:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:56.799 14:30:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:56.799 14:30:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:56.799 14:30:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:56.799 14:30:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:56.799 14:30:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:56.799 14:30:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:23:56.799 14:30:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:23:56.799 14:30:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:56.799 14:30:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:56.799 14:30:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:56.799 14:30:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:56.799 14:30:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:56.799 14:30:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:56.799 14:30:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:56.799 14:30:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.799 14:30:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.799 14:30:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.799 14:30:02 -- paths/export.sh@5 -- # export PATH 00:23:56.799 14:30:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.799 14:30:02 -- nvmf/common.sh@46 -- # : 0 00:23:56.799 14:30:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:56.799 14:30:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:56.799 14:30:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:56.799 14:30:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:56.799 14:30:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:56.799 14:30:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:56.799 14:30:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:56.799 14:30:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:56.799 14:30:02 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:56.799 14:30:02 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:56.799 14:30:02 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:56.799 14:30:02 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:56.799 14:30:02 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:56.799 14:30:02 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:56.799 14:30:02 -- host/multipath.sh@30 -- # nvmftestinit 00:23:56.799 14:30:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:56.799 14:30:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:56.799 14:30:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:56.799 14:30:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:56.799 14:30:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:56.799 14:30:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.799 14:30:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:56.799 14:30:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.799 14:30:02 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:56.799 14:30:02 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:56.799 14:30:02 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:56.799 14:30:02 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:56.799 14:30:02 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:56.799 14:30:02 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:56.799 14:30:02 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:56.799 14:30:02 -- nvmf/common.sh@141 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:56.799 14:30:02 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:56.799 14:30:02 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:56.799 14:30:02 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:56.799 14:30:02 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:56.799 14:30:02 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:56.799 14:30:02 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:56.799 14:30:02 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:56.799 14:30:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:56.799 14:30:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:56.799 14:30:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:56.799 14:30:02 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:56.799 14:30:02 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:56.799 Cannot find device "nvmf_tgt_br" 00:23:56.799 14:30:02 -- nvmf/common.sh@154 -- # true 00:23:56.799 14:30:02 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:56.799 Cannot find device "nvmf_tgt_br2" 00:23:56.799 14:30:02 -- nvmf/common.sh@155 -- # true 00:23:56.799 14:30:02 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:56.799 14:30:02 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:56.799 Cannot find device "nvmf_tgt_br" 00:23:56.799 14:30:02 -- nvmf/common.sh@157 -- # true 00:23:56.799 14:30:02 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:56.799 Cannot find device "nvmf_tgt_br2" 00:23:56.799 14:30:02 -- nvmf/common.sh@158 -- # true 00:23:56.799 14:30:02 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:56.799 14:30:02 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:56.799 14:30:02 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:56.799 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:56.799 14:30:02 -- nvmf/common.sh@161 -- # true 00:23:56.799 14:30:02 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:56.799 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:56.799 14:30:02 -- nvmf/common.sh@162 -- # true 00:23:56.799 14:30:02 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:56.799 14:30:02 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:56.799 14:30:02 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:56.799 14:30:02 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:56.799 14:30:02 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:57.058 14:30:02 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:57.058 14:30:02 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:57.058 14:30:02 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:57.058 14:30:02 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:57.058 14:30:02 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:57.058 14:30:02 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:57.058 14:30:02 -- nvmf/common.sh@184 -- # ip 
link set nvmf_tgt_br up 00:23:57.058 14:30:02 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:57.058 14:30:02 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:57.058 14:30:02 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:57.058 14:30:02 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:57.058 14:30:02 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:57.058 14:30:02 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:57.058 14:30:02 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:57.058 14:30:02 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:57.058 14:30:02 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:57.058 14:30:02 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:57.058 14:30:02 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:57.058 14:30:02 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:57.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:57.058 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:23:57.058 00:23:57.058 --- 10.0.0.2 ping statistics --- 00:23:57.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.058 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:23:57.058 14:30:02 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:57.058 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:57.058 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:23:57.058 00:23:57.058 --- 10.0.0.3 ping statistics --- 00:23:57.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.059 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:23:57.059 14:30:02 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:57.059 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:57.059 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:23:57.059 00:23:57.059 --- 10.0.0.1 ping statistics --- 00:23:57.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.059 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:23:57.059 14:30:02 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:57.059 14:30:02 -- nvmf/common.sh@421 -- # return 0 00:23:57.059 14:30:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:57.059 14:30:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:57.059 14:30:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:57.059 14:30:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:57.059 14:30:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:57.059 14:30:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:57.059 14:30:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:57.059 14:30:02 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:23:57.059 14:30:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:57.059 14:30:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:57.059 14:30:02 -- common/autotest_common.sh@10 -- # set +x 00:23:57.059 14:30:02 -- nvmf/common.sh@469 -- # nvmfpid=99035 00:23:57.059 14:30:02 -- nvmf/common.sh@470 -- # waitforlisten 99035 00:23:57.059 14:30:02 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:57.059 14:30:02 -- common/autotest_common.sh@829 -- # '[' -z 99035 ']' 00:23:57.059 14:30:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.059 14:30:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:57.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.059 14:30:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.059 14:30:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:57.059 14:30:02 -- common/autotest_common.sh@10 -- # set +x 00:23:57.059 [2024-12-05 14:30:02.652897] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:57.059 [2024-12-05 14:30:02.653008] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:57.317 [2024-12-05 14:30:02.790007] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:57.317 [2024-12-05 14:30:02.899815] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:57.317 [2024-12-05 14:30:02.899994] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:57.317 [2024-12-05 14:30:02.900010] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:57.317 [2024-12-05 14:30:02.900019] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
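For readers skimming the wall of ip/iptables output: nvmf_veth_init has just built a self-contained target namespace bridged to the host, and the three pings confirm it. Condensed to the essential commands (all taken from the run above; the individual `ip link set ... up` steps are omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The target (nvmf_tgt -i 0 -e 0xFFFF -m 0x3) then runs inside nvmf_tgt_ns_spdk with addresses 10.0.0.2 and 10.0.0.3, while the initiator stays in the root namespace at 10.0.0.1 and reaches it over the bridge.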
00:23:57.317 [2024-12-05 14:30:02.900151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.317 [2024-12-05 14:30:02.900519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.254 14:30:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:58.254 14:30:03 -- common/autotest_common.sh@862 -- # return 0 00:23:58.254 14:30:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:58.254 14:30:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:58.254 14:30:03 -- common/autotest_common.sh@10 -- # set +x 00:23:58.254 14:30:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:58.254 14:30:03 -- host/multipath.sh@33 -- # nvmfapp_pid=99035 00:23:58.254 14:30:03 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:58.512 [2024-12-05 14:30:03.958953] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:58.512 14:30:03 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:58.772 Malloc0 00:23:58.772 14:30:04 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:59.031 14:30:04 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:59.031 14:30:04 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:59.290 [2024-12-05 14:30:04.811526] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:59.290 14:30:04 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:59.549 [2024-12-05 14:30:05.015567] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:59.549 14:30:05 -- host/multipath.sh@44 -- # bdevperf_pid=99140 00:23:59.549 14:30:05 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:59.549 14:30:05 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:59.549 14:30:05 -- host/multipath.sh@47 -- # waitforlisten 99140 /var/tmp/bdevperf.sock 00:23:59.549 14:30:05 -- common/autotest_common.sh@829 -- # '[' -z 99140 ']' 00:23:59.549 14:30:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:59.549 14:30:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:59.549 14:30:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:59.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
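Target-side construction for the multipath test, condensed from the RPCs above (rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py): a single 64 MB malloc namespace is exported through one subsystem with two TCP listeners on the same address, and the -r flag (ANA reporting) is what later lets each listener advertise its own ANA state:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

bdevperf is then started as a separate process (-z, idling until told to run) with its own RPC socket at /var/tmp/bdevperf.sock, so the initiator side can be configured independently of the target.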
00:23:59.549 14:30:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:59.549 14:30:05 -- common/autotest_common.sh@10 -- # set +x 00:24:00.483 14:30:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:00.483 14:30:06 -- common/autotest_common.sh@862 -- # return 0 00:24:00.483 14:30:06 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:01.050 14:30:06 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:24:01.309 Nvme0n1 00:24:01.310 14:30:06 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:01.568 Nvme0n1 00:24:01.568 14:30:07 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:01.568 14:30:07 -- host/multipath.sh@78 -- # sleep 1 00:24:02.505 14:30:08 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:24:02.505 14:30:08 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:02.765 14:30:08 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:03.024 14:30:08 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:24:03.024 14:30:08 -- host/multipath.sh@65 -- # dtrace_pid=99227 00:24:03.024 14:30:08 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99035 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:03.024 14:30:08 -- host/multipath.sh@66 -- # sleep 6 00:24:09.596 14:30:14 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:09.596 14:30:14 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:09.596 14:30:14 -- host/multipath.sh@67 -- # active_port=4421 00:24:09.596 14:30:14 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:09.596 Attaching 4 probes... 
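On the initiator side the same subsystem is attached twice under one bdev name; the second bdev_nvme_attach_controller carries -x multipath, which is what makes the 4421 connection join Nvme0 as an additional path rather than a name clash (the test then flips ANA states per listener and checks which path actually carries I/O). Condensed from the RPCs above, together with the verification query that confirm_io_on_port runs next:

    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
    # which listener is currently the optimized path, according to the target:
    rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
        | jq -r '.[] | select(.ana_states[0].ana_state=="optimized") | .address.trsvcid'

The per-path I/O counts themselves come from the bpftrace helper (scripts/bpftrace.sh with nvmf_path.bt); its @path[addr, port] counters are what the "Attaching 4 probes" blocks below report.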
00:24:09.596 @path[10.0.0.2, 4421]: 22602 00:24:09.596 @path[10.0.0.2, 4421]: 23437 00:24:09.596 @path[10.0.0.2, 4421]: 23251 00:24:09.596 @path[10.0.0.2, 4421]: 23334 00:24:09.596 @path[10.0.0.2, 4421]: 23496 00:24:09.596 14:30:14 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:09.596 14:30:14 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:09.596 14:30:14 -- host/multipath.sh@69 -- # sed -n 1p 00:24:09.596 14:30:14 -- host/multipath.sh@69 -- # port=4421 00:24:09.596 14:30:14 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:09.597 14:30:14 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:09.597 14:30:14 -- host/multipath.sh@72 -- # kill 99227 00:24:09.597 14:30:14 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:09.597 14:30:14 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:24:09.597 14:30:14 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:09.597 14:30:15 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:09.861 14:30:15 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:24:09.861 14:30:15 -- host/multipath.sh@65 -- # dtrace_pid=99363 00:24:09.861 14:30:15 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99035 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:09.861 14:30:15 -- host/multipath.sh@66 -- # sleep 6 00:24:16.456 14:30:21 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:16.456 14:30:21 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:24:16.456 14:30:21 -- host/multipath.sh@67 -- # active_port=4420 00:24:16.456 14:30:21 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:16.456 Attaching 4 probes... 
00:24:16.456 @path[10.0.0.2, 4420]: 22845 00:24:16.456 @path[10.0.0.2, 4420]: 23285 00:24:16.456 @path[10.0.0.2, 4420]: 23216 00:24:16.456 @path[10.0.0.2, 4420]: 23184 00:24:16.456 @path[10.0.0.2, 4420]: 23086 00:24:16.456 14:30:21 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:16.456 14:30:21 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:16.456 14:30:21 -- host/multipath.sh@69 -- # sed -n 1p 00:24:16.456 14:30:21 -- host/multipath.sh@69 -- # port=4420 00:24:16.456 14:30:21 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:24:16.456 14:30:21 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:24:16.456 14:30:21 -- host/multipath.sh@72 -- # kill 99363 00:24:16.456 14:30:21 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:16.456 14:30:21 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:24:16.456 14:30:21 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:16.456 14:30:21 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:16.715 14:30:22 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:24:16.715 14:30:22 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99035 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:16.715 14:30:22 -- host/multipath.sh@65 -- # dtrace_pid=99493 00:24:16.715 14:30:22 -- host/multipath.sh@66 -- # sleep 6 00:24:23.298 14:30:28 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:23.298 14:30:28 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:23.298 14:30:28 -- host/multipath.sh@67 -- # active_port=4421 00:24:23.298 14:30:28 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:23.298 Attaching 4 probes... 
00:24:23.298 @path[10.0.0.2, 4421]: 14722 00:24:23.298 @path[10.0.0.2, 4421]: 21073 00:24:23.298 @path[10.0.0.2, 4421]: 21070 00:24:23.298 @path[10.0.0.2, 4421]: 21055 00:24:23.298 @path[10.0.0.2, 4421]: 21137 00:24:23.298 14:30:28 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:23.299 14:30:28 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:23.299 14:30:28 -- host/multipath.sh@69 -- # sed -n 1p 00:24:23.299 14:30:28 -- host/multipath.sh@69 -- # port=4421 00:24:23.299 14:30:28 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:23.299 14:30:28 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:23.299 14:30:28 -- host/multipath.sh@72 -- # kill 99493 00:24:23.299 14:30:28 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:23.299 14:30:28 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:24:23.299 14:30:28 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:23.299 14:30:28 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:23.559 14:30:28 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:24:23.559 14:30:28 -- host/multipath.sh@65 -- # dtrace_pid=99628 00:24:23.559 14:30:28 -- host/multipath.sh@66 -- # sleep 6 00:24:23.559 14:30:28 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99035 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:30.128 14:30:34 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:30.128 14:30:34 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:24:30.128 14:30:35 -- host/multipath.sh@67 -- # active_port= 00:24:30.128 14:30:35 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:30.128 Attaching 4 probes... 
00:24:30.128 00:24:30.128 00:24:30.128 00:24:30.128 00:24:30.128 00:24:30.128 14:30:35 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:30.128 14:30:35 -- host/multipath.sh@69 -- # sed -n 1p 00:24:30.128 14:30:35 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:30.128 14:30:35 -- host/multipath.sh@69 -- # port= 00:24:30.128 14:30:35 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:24:30.128 14:30:35 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:24:30.128 14:30:35 -- host/multipath.sh@72 -- # kill 99628 00:24:30.128 14:30:35 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:30.128 14:30:35 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:24:30.128 14:30:35 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:30.128 14:30:35 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:30.386 14:30:35 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:24:30.386 14:30:35 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99035 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:30.386 14:30:35 -- host/multipath.sh@65 -- # dtrace_pid=99760 00:24:30.386 14:30:35 -- host/multipath.sh@66 -- # sleep 6 00:24:37.002 14:30:41 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:37.002 14:30:41 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:37.002 14:30:42 -- host/multipath.sh@67 -- # active_port=4421 00:24:37.002 14:30:42 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:37.002 Attaching 4 probes... 
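The confirm_io_on_port helper seen in these blocks follows one pattern: attach the nvmf_path.bt bpftrace script to the bdevperf process (pid 99035 here) so it counts I/O per target path, sleep six seconds, then compare the port reported by the listener query against the port that actually saw I/O in the trace. A sketch of the parsing half, with trace.txt shortened from the full path in the log and $expected_port as a placeholder name:

    # take the first "@path[10.0.0.2, PORT]: count" line and extract PORT
    port=$(cut -d ']' -f1 trace.txt | awk '$1=="@path[10.0.0.2," {print $2}' | sed -n 1p)
    [[ "$port" == "$expected_port" ]]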
00:24:37.002 @path[10.0.0.2, 4421]: 20364 00:24:37.002 @path[10.0.0.2, 4421]: 20892 00:24:37.002 @path[10.0.0.2, 4421]: 20885 00:24:37.002 @path[10.0.0.2, 4421]: 20769 00:24:37.002 @path[10.0.0.2, 4421]: 20944 00:24:37.002 14:30:42 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:37.002 14:30:42 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:37.002 14:30:42 -- host/multipath.sh@69 -- # sed -n 1p 00:24:37.002 14:30:42 -- host/multipath.sh@69 -- # port=4421 00:24:37.002 14:30:42 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:37.002 14:30:42 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:37.002 14:30:42 -- host/multipath.sh@72 -- # kill 99760 00:24:37.002 14:30:42 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:37.002 14:30:42 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:37.002 [2024-12-05 14:30:42.382567] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.002 [2024-12-05 14:30:42.382628] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.002 [2024-12-05 14:30:42.382638] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.002 [2024-12-05 14:30:42.382648] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.002 [2024-12-05 14:30:42.382655] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.002 [2024-12-05 14:30:42.382663] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.002 [2024-12-05 14:30:42.382671] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.002 [2024-12-05 14:30:42.382678] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.002 [2024-12-05 14:30:42.382686] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.002 [2024-12-05 14:30:42.382693] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.002 [2024-12-05 14:30:42.382700] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.002 [2024-12-05 14:30:42.382707] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.002 [2024-12-05 14:30:42.382715] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.002 [2024-12-05 14:30:42.382722] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.002 [2024-12-05 14:30:42.382729] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.002 [2024-12-05 14:30:42.382736] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.002 [2024-12-05 14:30:42.382744] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.002 [2024-12-05 14:30:42.382752] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.002 [2024-12-05 14:30:42.382759] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.002 [2024-12-05 14:30:42.382766] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.382773] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.382780] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.382787] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.382794] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.382801] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.382855] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.382863] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.382871] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.382889] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.382896] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.382912] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.382920] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.382929] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.382937] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.382946] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.382953] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.382961] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.382970] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.382978] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.382986] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.382994] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.383002] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.383010] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.383018] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.383027] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.383040] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.383049] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.383057] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.383065] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.383072] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.383080] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.383088] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.383097] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.383115] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.383123] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.383131] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.383138] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.383146] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.383156] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the 
state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.383165] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.383188] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.383219] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.383226] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.383233] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.383248] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.383254] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.383261] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.383268] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.383275] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.383282] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.383288] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.383294] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.383300] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 [2024-12-05 14:30:42.383307] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3370 is same with the state(5) to be set 00:24:37.003 14:30:42 -- host/multipath.sh@101 -- # sleep 1 00:24:37.937 14:30:43 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:24:37.937 14:30:43 -- host/multipath.sh@65 -- # dtrace_pid=99894 00:24:37.937 14:30:43 -- host/multipath.sh@66 -- # sleep 6 00:24:37.937 14:30:43 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99035 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:44.499 14:30:49 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:44.499 14:30:49 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:24:44.499 14:30:49 -- host/multipath.sh@67 -- # active_port=4420 00:24:44.499 14:30:49 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:44.499 Attaching 4 probes... 
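The burst of tcp.c:1576 recv-state messages above begins right after the 4421 listener is removed, so it presumably comes from the target tearing down the connection on that path; the test then sleeps briefly and confirms that I/O has failed over to the remaining non_optimized listener on 4420. The failover trigger itself is a single RPC (path shortened):

    # drop the optimized path entirely; I/O should move to the surviving 4420 listener
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421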
00:24:44.499 @path[10.0.0.2, 4420]: 20495 00:24:44.499 @path[10.0.0.2, 4420]: 20976 00:24:44.499 @path[10.0.0.2, 4420]: 21066 00:24:44.499 @path[10.0.0.2, 4420]: 21063 00:24:44.499 @path[10.0.0.2, 4420]: 21013 00:24:44.499 14:30:49 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:44.499 14:30:49 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:44.499 14:30:49 -- host/multipath.sh@69 -- # sed -n 1p 00:24:44.499 14:30:49 -- host/multipath.sh@69 -- # port=4420 00:24:44.499 14:30:49 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:24:44.499 14:30:49 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:24:44.499 14:30:49 -- host/multipath.sh@72 -- # kill 99894 00:24:44.499 14:30:49 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:44.499 14:30:49 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:44.499 [2024-12-05 14:30:49.892855] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:44.499 14:30:49 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:44.499 14:30:50 -- host/multipath.sh@111 -- # sleep 6 00:24:51.073 14:30:56 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:24:51.073 14:30:56 -- host/multipath.sh@65 -- # dtrace_pid=100088 00:24:51.073 14:30:56 -- host/multipath.sh@66 -- # sleep 6 00:24:51.073 14:30:56 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99035 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:57.669 14:31:02 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:57.669 14:31:02 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:57.669 14:31:02 -- host/multipath.sh@67 -- # active_port=4421 00:24:57.669 14:31:02 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:57.669 Attaching 4 probes... 
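The failback half of the test is the mirror image: the 4421 listener is re-added, marked optimized, and after the sleep the confirm step checks that I/O has moved back to it. Standalone, the two RPCs from the log look like this (scripts/rpc.py again shortened from the full repo path):

    # restore the 4421 path and mark it optimized so the host prefers it again
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421
    scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n optimized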
00:24:57.669 @path[10.0.0.2, 4421]: 20338 00:24:57.669 @path[10.0.0.2, 4421]: 20644 00:24:57.669 @path[10.0.0.2, 4421]: 20733 00:24:57.669 @path[10.0.0.2, 4421]: 20499 00:24:57.669 @path[10.0.0.2, 4421]: 20797 00:24:57.670 14:31:02 -- host/multipath.sh@69 -- # sed -n 1p 00:24:57.670 14:31:02 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:57.670 14:31:02 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:57.670 14:31:02 -- host/multipath.sh@69 -- # port=4421 00:24:57.670 14:31:02 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:57.670 14:31:02 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:57.670 14:31:02 -- host/multipath.sh@72 -- # kill 100088 00:24:57.670 14:31:02 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:57.670 14:31:02 -- host/multipath.sh@114 -- # killprocess 99140 00:24:57.670 14:31:02 -- common/autotest_common.sh@936 -- # '[' -z 99140 ']' 00:24:57.670 14:31:02 -- common/autotest_common.sh@940 -- # kill -0 99140 00:24:57.670 14:31:02 -- common/autotest_common.sh@941 -- # uname 00:24:57.670 14:31:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:57.670 14:31:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 99140 00:24:57.670 killing process with pid 99140 00:24:57.670 14:31:02 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:57.670 14:31:02 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:57.670 14:31:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 99140' 00:24:57.670 14:31:02 -- common/autotest_common.sh@955 -- # kill 99140 00:24:57.670 14:31:02 -- common/autotest_common.sh@960 -- # wait 99140 00:24:57.670 Connection closed with partial response: 00:24:57.670 00:24:57.670 00:24:57.670 14:31:02 -- host/multipath.sh@116 -- # wait 99140 00:24:57.670 14:31:02 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:57.670 [2024-12-05 14:30:05.070062] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:57.670 [2024-12-05 14:30:05.070145] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99140 ] 00:24:57.670 [2024-12-05 14:30:05.193722] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.670 [2024-12-05 14:30:05.252749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:57.670 Running I/O for 90 seconds... 
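The killprocess call above verifies that pid 99140 still belongs to the test (its comm is reactor_2, not sudo) before killing it and waiting for it to exit, which is why bdevperf gets to print "Connection closed with partial response" before try.txt is dumped. Everything after the cat of try.txt is bdevperf's own log, and the NOTICE pairs that follow are individual commands and their completions returned with ASYMMETRIC ACCESS INACCESSIBLE status, presumably from the windows in which the active path had been switched to inaccessible. A rough sketch of that teardown pattern, with $bdevperf_pid standing in for the literal 99140:

    # only kill the pid if it is still ours (and not sudo), then wait so output is flushed
    if [ "$(ps --no-headers -o comm= "$bdevperf_pid")" != "sudo" ]; then
        kill "$bdevperf_pid"
        wait "$bdevperf_pid"
    fi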
00:24:57.670 [2024-12-05 14:30:15.352525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:108512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.670 [2024-12-05 14:30:15.352575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:57.670 [2024-12-05 14:30:15.352609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:108864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.670 [2024-12-05 14:30:15.352625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:57.670 [2024-12-05 14:30:15.352643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:108872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.670 [2024-12-05 14:30:15.352656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:57.670 [2024-12-05 14:30:15.352674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:108880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.670 [2024-12-05 14:30:15.352687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:57.670 [2024-12-05 14:30:15.352705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:108888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.670 [2024-12-05 14:30:15.352717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:57.670 [2024-12-05 14:30:15.352734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:108896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.670 [2024-12-05 14:30:15.352747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:57.670 [2024-12-05 14:30:15.352764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:108904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.670 [2024-12-05 14:30:15.352777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:57.670 [2024-12-05 14:30:15.352794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:108912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.670 [2024-12-05 14:30:15.352834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:57.670 [2024-12-05 14:30:15.352873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:108920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.670 [2024-12-05 14:30:15.352889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:57.670 [2024-12-05 14:30:15.352909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:108928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.670 [2024-12-05 14:30:15.352923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:57.670 [2024-12-05 14:30:15.352943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:108936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.670 [2024-12-05 14:30:15.352976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:57.670 [2024-12-05 14:30:15.353000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:108944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.670 [2024-12-05 14:30:15.353015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:57.670 [2024-12-05 14:30:15.353463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:108952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.670 [2024-12-05 14:30:15.353487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:57.670 [2024-12-05 14:30:15.353509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:108960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.670 [2024-12-05 14:30:15.353525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:57.670 [2024-12-05 14:30:15.353543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:108968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.670 [2024-12-05 14:30:15.353555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:57.670 [2024-12-05 14:30:15.353573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:108976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.670 [2024-12-05 14:30:15.353585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:57.670 [2024-12-05 14:30:15.353604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:108984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.670 [2024-12-05 14:30:15.353617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:57.670 [2024-12-05 14:30:15.353634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.670 [2024-12-05 14:30:15.353647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:57.670 [2024-12-05 14:30:15.353663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:109000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.670 [2024-12-05 14:30:15.353676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:57.670 [2024-12-05 14:30:15.353694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:109008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.670 [2024-12-05 14:30:15.353706] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:57.670 [2024-12-05 14:30:15.353723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:109016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.670 [2024-12-05 14:30:15.353736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:57.670 [2024-12-05 14:30:15.353753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.670 [2024-12-05 14:30:15.353765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:57.670 [2024-12-05 14:30:15.353782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:109032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.670 [2024-12-05 14:30:15.353794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:57.670 [2024-12-05 14:30:15.354432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:109040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.670 [2024-12-05 14:30:15.354455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:57.670 [2024-12-05 14:30:15.354476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:109048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.670 [2024-12-05 14:30:15.354490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:57.670 [2024-12-05 14:30:15.354508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:109056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.670 [2024-12-05 14:30:15.354522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:57.670 [2024-12-05 14:30:15.354540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:109064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.670 [2024-12-05 14:30:15.354553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:57.670 [2024-12-05 14:30:15.354571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:109072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.670 [2024-12-05 14:30:15.354586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:57.670 [2024-12-05 14:30:15.354605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:109080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.670 [2024-12-05 14:30:15.354618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:57.670 [2024-12-05 14:30:15.354636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:109088 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:57.671 [2024-12-05 14:30:15.354649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:57.671 [2024-12-05 14:30:15.354667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:109096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.671 [2024-12-05 14:30:15.354680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.671 [2024-12-05 14:30:15.354698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:109104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.671 [2024-12-05 14:30:15.354711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:57.671 [2024-12-05 14:30:15.354730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:109112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.671 [2024-12-05 14:30:15.354743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:57.671 [2024-12-05 14:30:15.354762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:109120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.671 [2024-12-05 14:30:15.354775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:57.671 [2024-12-05 14:30:15.354793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:109128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.671 [2024-12-05 14:30:15.354806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:57.671 [2024-12-05 14:30:15.354879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:109136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.671 [2024-12-05 14:30:15.354898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:57.671 [2024-12-05 14:30:15.354919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:109144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.671 [2024-12-05 14:30:15.354934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:57.671 [2024-12-05 14:30:15.354954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:109152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.671 [2024-12-05 14:30:15.354967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:57.671 [2024-12-05 14:30:15.354987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:109160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.671 [2024-12-05 14:30:15.355002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:57.671 [2024-12-05 14:30:15.355022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:44 nsid:1 lba:109168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.671 [2024-12-05 14:30:15.355036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:57.671 [2024-12-05 14:30:15.355056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:109176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.671 [2024-12-05 14:30:15.355070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:57.671 [2024-12-05 14:30:15.355089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:109184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.671 [2024-12-05 14:30:15.355104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:57.671 [2024-12-05 14:30:15.355123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:109192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.671 [2024-12-05 14:30:15.355138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:57.671 [2024-12-05 14:30:15.355161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:108560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.671 [2024-12-05 14:30:15.355204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:57.671 [2024-12-05 14:30:15.355237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:108568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.671 [2024-12-05 14:30:15.355251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:57.671 [2024-12-05 14:30:15.355269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:108680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.671 [2024-12-05 14:30:15.355282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:57.671 [2024-12-05 14:30:15.355300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:108720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.671 [2024-12-05 14:30:15.355313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:57.671 [2024-12-05 14:30:15.355332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:108728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.671 [2024-12-05 14:30:15.355352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:57.671 [2024-12-05 14:30:15.355381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:108752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.671 [2024-12-05 14:30:15.355395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:57.671 [2024-12-05 
14:30:15.355414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:108776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.671 [2024-12-05 14:30:15.355428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:57.671 [2024-12-05 14:30:15.355447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:108792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.671 [2024-12-05 14:30:15.355460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:57.671 [2024-12-05 14:30:15.355479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:109200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.671 [2024-12-05 14:30:15.355492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:57.671 [2024-12-05 14:30:15.355511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:109208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.671 [2024-12-05 14:30:15.355524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:57.671 [2024-12-05 14:30:15.355543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:109216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.671 [2024-12-05 14:30:15.355556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:57.671 [2024-12-05 14:30:15.355574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:109224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.671 [2024-12-05 14:30:15.355587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:57.671 [2024-12-05 14:30:15.355605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:109232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.671 [2024-12-05 14:30:15.355618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:57.671 [2024-12-05 14:30:15.355637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:109240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.671 [2024-12-05 14:30:15.355650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:57.671 [2024-12-05 14:30:15.355668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:109248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.671 [2024-12-05 14:30:15.355681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:57.671 [2024-12-05 14:30:15.355699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:109256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.671 [2024-12-05 14:30:15.355712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:90 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:57.671 [2024-12-05 14:30:15.355730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:109264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.671 [2024-12-05 14:30:15.355750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:57.671 [2024-12-05 14:30:15.355769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:109272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.671 [2024-12-05 14:30:15.355783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:57.671 [2024-12-05 14:30:15.355801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:108800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.671 [2024-12-05 14:30:15.355830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:57.671 [2024-12-05 14:30:15.355866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:108808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.671 [2024-12-05 14:30:15.355893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.671 [2024-12-05 14:30:15.355917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:108816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.671 [2024-12-05 14:30:15.355932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:57.671 [2024-12-05 14:30:15.355968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:108832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.671 [2024-12-05 14:30:15.355986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:57.671 [2024-12-05 14:30:15.356006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:108840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.671 [2024-12-05 14:30:15.356021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:57.671 [2024-12-05 14:30:15.356734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:108856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.671 [2024-12-05 14:30:15.356760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:57.671 [2024-12-05 14:30:15.356784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:109280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.671 [2024-12-05 14:30:15.356799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:57.672 [2024-12-05 14:30:15.356850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:109288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.672 [2024-12-05 14:30:15.356880] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:57.672 [2024-12-05 14:30:15.356904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:109296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.672 [2024-12-05 14:30:15.356918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:57.672 [2024-12-05 14:30:15.356938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:109304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.672 [2024-12-05 14:30:15.356952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:57.672 [2024-12-05 14:30:15.356972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:109312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.672 [2024-12-05 14:30:15.356996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:57.672 [2024-12-05 14:30:15.357018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:109320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.672 [2024-12-05 14:30:15.357032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:57.672 [2024-12-05 14:30:15.357053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:109328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.672 [2024-12-05 14:30:15.357067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:57.672 [2024-12-05 14:30:15.357086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:109336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.672 [2024-12-05 14:30:15.357100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:57.672 [2024-12-05 14:30:15.357119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:109344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.672 [2024-12-05 14:30:15.357133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:57.672 [2024-12-05 14:30:15.357153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:109352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.672 [2024-12-05 14:30:15.357166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:57.672 [2024-12-05 14:30:15.357200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:109360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.672 [2024-12-05 14:30:15.357229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:57.672 [2024-12-05 14:30:15.357247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:109368 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:57.672 [2024-12-05 14:30:15.357261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:57.672 [2024-12-05 14:30:15.357279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:109376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.672 [2024-12-05 14:30:15.357292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:57.672 [2024-12-05 14:30:15.357312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:109384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.672 [2024-12-05 14:30:15.357325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:57.672 [2024-12-05 14:30:15.357343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:109392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.672 [2024-12-05 14:30:15.357357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:57.672 [2024-12-05 14:30:15.357374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:109400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.672 [2024-12-05 14:30:15.357388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:57.672 [2024-12-05 14:30:15.357406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:109408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.672 [2024-12-05 14:30:15.357419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:57.672 [2024-12-05 14:30:15.357444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:109416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.672 [2024-12-05 14:30:15.357458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:57.672 [2024-12-05 14:30:15.357477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:109424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.672 [2024-12-05 14:30:15.357490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:57.672 [2024-12-05 14:30:15.357514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:109432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.672 [2024-12-05 14:30:15.357528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:57.672 [2024-12-05 14:30:15.357546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:109440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.672 [2024-12-05 14:30:15.357559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:57.672 [2024-12-05 14:30:15.357577] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:109448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.672 [2024-12-05 14:30:15.357590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:57.672 [2024-12-05 14:30:15.357609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:109456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.672 [2024-12-05 14:30:15.357622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:57.672 [2024-12-05 14:30:15.357640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:109464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.672 [2024-12-05 14:30:15.357654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:57.672 [2024-12-05 14:30:15.357672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:109472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.672 [2024-12-05 14:30:15.357685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:57.672 [2024-12-05 14:30:15.357704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:109480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.672 [2024-12-05 14:30:15.357717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:57.672 [2024-12-05 14:30:15.357735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:109488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.672 [2024-12-05 14:30:15.357749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.672 [2024-12-05 14:30:15.357767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:109496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.672 [2024-12-05 14:30:15.357780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.672 [2024-12-05 14:30:15.357798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:109504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.672 [2024-12-05 14:30:15.357828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:57.672 [2024-12-05 14:30:15.357881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:109512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.672 [2024-12-05 14:30:15.357899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:57.672 [2024-12-05 14:30:15.357919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:109520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.672 [2024-12-05 14:30:15.357934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:57.672 [2024-12-05 
14:30:15.357953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:109528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.672 [2024-12-05 14:30:15.357966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:57.672 [2024-12-05 14:30:15.357986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:109536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.672 [2024-12-05 14:30:15.357999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:57.672 [2024-12-05 14:30:15.358019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:109544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.672 [2024-12-05 14:30:15.358032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:57.672 [2024-12-05 14:30:15.358051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:109552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.672 [2024-12-05 14:30:15.358065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:57.672 [2024-12-05 14:30:15.358093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:109560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.672 [2024-12-05 14:30:15.358108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:57.672 [2024-12-05 14:30:15.358127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:108512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.672 [2024-12-05 14:30:15.358141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:57.672 [2024-12-05 14:30:15.358160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:108864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.672 [2024-12-05 14:30:15.358174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:57.672 [2024-12-05 14:30:15.358221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:108872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.672 [2024-12-05 14:30:15.358234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:57.672 [2024-12-05 14:30:15.358253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:108880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.673 [2024-12-05 14:30:15.358265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:57.673 [2024-12-05 14:30:15.358283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:108888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.673 [2024-12-05 14:30:15.358296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:73 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:57.673 [2024-12-05 14:30:15.358322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:108896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.673 [2024-12-05 14:30:15.358336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:57.673 [2024-12-05 14:30:15.358355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:108904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.673 [2024-12-05 14:30:15.358368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:57.673 [2024-12-05 14:30:15.358386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:108912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.673 [2024-12-05 14:30:15.358399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:57.673 [2024-12-05 14:30:15.358417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:108920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.673 [2024-12-05 14:30:15.358430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:57.673 [2024-12-05 14:30:15.358449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:108928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.673 [2024-12-05 14:30:15.358462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:57.673 [2024-12-05 14:30:15.358481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:108936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.673 [2024-12-05 14:30:15.358494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:57.673 [2024-12-05 14:30:15.358965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:108944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.673 [2024-12-05 14:30:15.358991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:57.673 [2024-12-05 14:30:15.359016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:109568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.673 [2024-12-05 14:30:15.359033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:57.673 [2024-12-05 14:30:15.359053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:109576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.673 [2024-12-05 14:30:15.359066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:57.673 [2024-12-05 14:30:15.359085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:109584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.673 [2024-12-05 14:30:15.359099] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:57.673 [2024-12-05 14:30:15.359119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:109592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.673 [2024-12-05 14:30:15.359133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:57.673 [2024-12-05 14:30:15.359153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:109600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.673 [2024-12-05 14:30:15.359166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:57.673 [2024-12-05 14:30:15.359200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:109608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.673 [2024-12-05 14:30:15.359249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:57.673 [2024-12-05 14:30:15.359268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:109616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.673 [2024-12-05 14:30:15.359281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:57.673 [2024-12-05 14:30:15.359299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:109624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.673 [2024-12-05 14:30:15.359311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:57.673 [2024-12-05 14:30:15.359341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:109632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.673 [2024-12-05 14:30:15.359354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:57.673 [2024-12-05 14:30:15.359372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:109640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.673 [2024-12-05 14:30:15.359385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:57.673 [2024-12-05 14:30:15.359403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:109648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.673 [2024-12-05 14:30:15.359416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:57.673 [2024-12-05 14:30:15.359434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:109656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.673 [2024-12-05 14:30:15.359448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.673 [2024-12-05 14:30:15.359466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:109664 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:57.673 [2024-12-05 14:30:15.359479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:57.673 [2024-12-05 14:30:15.359498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:109672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.673 [2024-12-05 14:30:15.359512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:57.673 [2024-12-05 14:30:15.359530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:109680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.673 [2024-12-05 14:30:15.359544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:57.673 [2024-12-05 14:30:15.359562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:109688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.673 [2024-12-05 14:30:15.359575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:57.673 [2024-12-05 14:30:15.359594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:109696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.673 [2024-12-05 14:30:15.359607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:57.673 [2024-12-05 14:30:15.359625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:109704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.673 [2024-12-05 14:30:15.359645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:57.673 [2024-12-05 14:30:15.359680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:109712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.673 [2024-12-05 14:30:15.359693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:57.673 [2024-12-05 14:30:15.359711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:109720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.673 [2024-12-05 14:30:15.359723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:57.673 [2024-12-05 14:30:15.359741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:109728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.673 [2024-12-05 14:30:15.359753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:57.673 [2024-12-05 14:30:15.359771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:109736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.673 [2024-12-05 14:30:15.359783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:57.673 [2024-12-05 14:30:15.359801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:84 nsid:1 lba:109744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.673 [2024-12-05 14:30:15.359831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:57.673 [2024-12-05 14:30:15.359864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:109752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.673 [2024-12-05 14:30:15.359877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:57.673 [2024-12-05 14:30:15.359896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:109760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.673 [2024-12-05 14:30:15.359927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:57.673 [2024-12-05 14:30:15.359972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:108952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.673 [2024-12-05 14:30:15.359989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:57.673 [2024-12-05 14:30:15.360009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:108960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.673 [2024-12-05 14:30:15.360023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:57.673 [2024-12-05 14:30:15.360042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:108968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.673 [2024-12-05 14:30:15.360055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:57.673 [2024-12-05 14:30:15.360075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:108976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.673 [2024-12-05 14:30:15.360089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:57.673 [2024-12-05 14:30:15.360115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:108984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.673 [2024-12-05 14:30:15.360129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:57.673 [2024-12-05 14:30:15.360157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:108992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-12-05 14:30:15.360171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:57.674 [2024-12-05 14:30:15.360205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:109000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-12-05 14:30:15.360232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:57.674 [2024-12-05 
14:30:15.360250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:109008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-12-05 14:30:15.360263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:57.674 [2024-12-05 14:30:15.360281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:109016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.674 [2024-12-05 14:30:15.360300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:57.674 [2024-12-05 14:30:15.360319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:109024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-12-05 14:30:15.360332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:57.674 [2024-12-05 14:30:15.360350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:109032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-12-05 14:30:15.360363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:57.674 [2024-12-05 14:30:15.360381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:109040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-12-05 14:30:15.360394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:57.674 [2024-12-05 14:30:15.360412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:109048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-12-05 14:30:15.360425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:57.674 [2024-12-05 14:30:15.360444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:109056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.674 [2024-12-05 14:30:15.360456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:57.674 [2024-12-05 14:30:15.360474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:109064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-12-05 14:30:15.360488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:57.674 [2024-12-05 14:30:15.360506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:109072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.674 [2024-12-05 14:30:15.360519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:57.674 [2024-12-05 14:30:15.360537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:109080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-12-05 14:30:15.360550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:117 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:57.674 [2024-12-05 14:30:15.360575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:109088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-12-05 14:30:15.360589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:57.674 [2024-12-05 14:30:15.360607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:109096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.674 [2024-12-05 14:30:15.360620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.674 [2024-12-05 14:30:15.360638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:109104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-12-05 14:30:15.360651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:57.674 [2024-12-05 14:30:15.360675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:109112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.674 [2024-12-05 14:30:15.360689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:57.674 [2024-12-05 14:30:15.360707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:109120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-12-05 14:30:15.360720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:57.674 [2024-12-05 14:30:15.360738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:109128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.674 [2024-12-05 14:30:15.360750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:57.674 [2024-12-05 14:30:15.360769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:109136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-12-05 14:30:15.360782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:57.674 [2024-12-05 14:30:15.360801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:109144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-12-05 14:30:15.360818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:57.674 [2024-12-05 14:30:15.360857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:109152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.674 [2024-12-05 14:30:15.360874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:57.674 [2024-12-05 14:30:15.360893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:109160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.674 [2024-12-05 14:30:15.360906] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:57.674 [2024-12-05 14:30:15.360924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:109168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.674 [2024-12-05 14:30:15.360937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:57.674 [2024-12-05 14:30:15.360955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:109176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-12-05 14:30:15.360968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:57.674 [2024-12-05 14:30:15.360994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:109184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.674 [2024-12-05 14:30:15.361008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:57.674 [2024-12-05 14:30:15.361027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:109192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.674 [2024-12-05 14:30:15.361040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:57.674 [2024-12-05 14:30:15.361058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:108560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.674 [2024-12-05 14:30:15.361071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:57.674 [2024-12-05 14:30:15.361089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:108568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.674 [2024-12-05 14:30:15.361102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:57.674 [2024-12-05 14:30:15.361120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:108680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.674 [2024-12-05 14:30:15.361133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:57.674 [2024-12-05 14:30:15.361152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:108720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.674 [2024-12-05 14:30:15.361165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:57.674 [2024-12-05 14:30:15.361183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.674 [2024-12-05 14:30:15.361196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:57.674 [2024-12-05 14:30:15.361219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:108752 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:57.675 [2024-12-05 14:30:15.361233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:57.675 [2024-12-05 14:30:15.361252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:108776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.675 [2024-12-05 14:30:15.361265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:57.675 [2024-12-05 14:30:15.361283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:108792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.675 [2024-12-05 14:30:15.361296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:57.675 [2024-12-05 14:30:15.361314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:109200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.675 [2024-12-05 14:30:15.361327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:57.675 [2024-12-05 14:30:15.361346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.675 [2024-12-05 14:30:15.361364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:57.675 [2024-12-05 14:30:15.361384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:109216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.675 [2024-12-05 14:30:15.361406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:57.675 [2024-12-05 14:30:15.361426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:109224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.675 [2024-12-05 14:30:15.361439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:57.675 [2024-12-05 14:30:15.361458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.675 [2024-12-05 14:30:15.361471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:57.675 [2024-12-05 14:30:15.361489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:109240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.675 [2024-12-05 14:30:15.361502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:57.675 [2024-12-05 14:30:15.361520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:109248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.675 [2024-12-05 14:30:15.361533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:57.675 [2024-12-05 14:30:15.361551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:78 nsid:1 lba:109256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.675 [2024-12-05 14:30:15.361564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:57.675 [2024-12-05 14:30:15.361583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.675 [2024-12-05 14:30:15.361596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:57.675 [2024-12-05 14:30:15.361614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:109272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.675 [2024-12-05 14:30:15.361627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:57.675 [2024-12-05 14:30:15.361645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:108800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.675 [2024-12-05 14:30:15.361658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:57.675 [2024-12-05 14:30:15.361676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:108808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.675 [2024-12-05 14:30:15.361689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.675 [2024-12-05 14:30:15.373243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:108816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.675 [2024-12-05 14:30:15.373301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:57.675 [2024-12-05 14:30:15.373327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:108832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.675 [2024-12-05 14:30:15.373343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:57.675 [2024-12-05 14:30:15.374266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:108840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.675 [2024-12-05 14:30:15.374305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:57.675 [2024-12-05 14:30:15.374333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:108856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.675 [2024-12-05 14:30:15.374349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:57.675 [2024-12-05 14:30:15.374368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:109280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.675 [2024-12-05 14:30:15.374382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:57.675 [2024-12-05 
14:30:15.374400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:109288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.675 [2024-12-05 14:30:15.374414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:57.675 [2024-12-05 14:30:15.374432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:109296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.675 [2024-12-05 14:30:15.374445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:57.675 [2024-12-05 14:30:15.374463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:109304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.675 [2024-12-05 14:30:15.374477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:57.675 [2024-12-05 14:30:15.374495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:109312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.675 [2024-12-05 14:30:15.374508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:57.675 [2024-12-05 14:30:15.374526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:109320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.675 [2024-12-05 14:30:15.374539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:57.675 [2024-12-05 14:30:15.374558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:109328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.675 [2024-12-05 14:30:15.374571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:57.675 [2024-12-05 14:30:15.374589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:109336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.675 [2024-12-05 14:30:15.374601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:57.675 [2024-12-05 14:30:15.374619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:109344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.675 [2024-12-05 14:30:15.374632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:57.675 [2024-12-05 14:30:15.374650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:109352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.675 [2024-12-05 14:30:15.374663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:57.675 [2024-12-05 14:30:15.374681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:109360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.675 [2024-12-05 14:30:15.374694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:87 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:57.675 [2024-12-05 14:30:15.374719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:109368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.675 [2024-12-05 14:30:15.374733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:57.675 [2024-12-05 14:30:15.374751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:109376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.675 [2024-12-05 14:30:15.374764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:57.675 [2024-12-05 14:30:15.374782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:109384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.675 [2024-12-05 14:30:15.374795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:57.675 [2024-12-05 14:30:15.374830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:109392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.675 [2024-12-05 14:30:15.374860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:57.675 [2024-12-05 14:30:15.374896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:109400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.675 [2024-12-05 14:30:15.374911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:57.675 [2024-12-05 14:30:15.374932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:109408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.675 [2024-12-05 14:30:15.374945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:57.675 [2024-12-05 14:30:15.374965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:109416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.675 [2024-12-05 14:30:15.374978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:57.675 [2024-12-05 14:30:15.374997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:109424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.675 [2024-12-05 14:30:15.375011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:57.675 [2024-12-05 14:30:15.375030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:109432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.675 [2024-12-05 14:30:15.375043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:57.675 [2024-12-05 14:30:15.375062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:109440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.676 [2024-12-05 14:30:15.375076] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:57.676 [2024-12-05 14:30:15.375096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:109448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.676 [2024-12-05 14:30:15.375109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:57.676 [2024-12-05 14:30:15.375128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:109456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.676 [2024-12-05 14:30:15.375157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:57.676 [2024-12-05 14:30:15.375214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:109464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.676 [2024-12-05 14:30:15.375229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:57.676 [2024-12-05 14:30:15.375247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:109472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.676 [2024-12-05 14:30:15.375260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:57.676 [2024-12-05 14:30:15.375278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:109480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.676 [2024-12-05 14:30:15.375291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:57.676 [2024-12-05 14:30:15.375325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:109488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.676 [2024-12-05 14:30:15.375337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.676 [2024-12-05 14:30:15.375355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:109496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.676 [2024-12-05 14:30:15.375367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.676 [2024-12-05 14:30:15.375385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:109504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.676 [2024-12-05 14:30:15.375398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:57.676 [2024-12-05 14:30:15.375415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:109512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.676 [2024-12-05 14:30:15.375428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:57.676 [2024-12-05 14:30:15.375445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:109520 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:57.676 [2024-12-05 14:30:15.375458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:57.676 [2024-12-05 14:30:15.375476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:109528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.676 [2024-12-05 14:30:15.375488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:57.676 [2024-12-05 14:30:15.375506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:109536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.676 [2024-12-05 14:30:15.375519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:57.676 [2024-12-05 14:30:15.375536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:109544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.676 [2024-12-05 14:30:15.375548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:57.676 [2024-12-05 14:30:15.375566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:109552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.676 [2024-12-05 14:30:15.375579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:57.676 [2024-12-05 14:30:15.375602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:109560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.676 [2024-12-05 14:30:15.375616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:57.676 [2024-12-05 14:30:15.375633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:108512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.676 [2024-12-05 14:30:15.375646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:57.676 [2024-12-05 14:30:15.375664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:108864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.676 [2024-12-05 14:30:15.375677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:57.676 [2024-12-05 14:30:15.375694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:108872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.676 [2024-12-05 14:30:15.375706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:57.676 [2024-12-05 14:30:15.375724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:108880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.676 [2024-12-05 14:30:15.375736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:57.676 [2024-12-05 14:30:15.375754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:115 nsid:1 lba:108888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.676 [2024-12-05 14:30:15.375767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:57.676 [2024-12-05 14:30:15.375784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:108896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.676 [2024-12-05 14:30:15.375796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:57.676 [2024-12-05 14:30:15.375830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:108904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.676 [2024-12-05 14:30:15.375859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:57.676 [2024-12-05 14:30:15.375892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:108912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.676 [2024-12-05 14:30:15.375911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:57.676 [2024-12-05 14:30:15.375931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:108920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.676 [2024-12-05 14:30:15.375946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:57.676 [2024-12-05 14:30:15.375981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:108928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.676 [2024-12-05 14:30:15.375995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:57.676 [2024-12-05 14:30:15.376014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:108936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.676 [2024-12-05 14:30:15.376028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:57.676 [2024-12-05 14:30:15.376047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:108944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.676 [2024-12-05 14:30:15.376088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:57.676 [2024-12-05 14:30:15.376116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:109568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.676 [2024-12-05 14:30:15.376134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:57.676 [2024-12-05 14:30:15.376160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:109576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.676 [2024-12-05 14:30:15.376179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:57.676 [2024-12-05 
14:30:15.376205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:109584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.676 [2024-12-05 14:30:15.376223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:57.676 [2024-12-05 14:30:15.376250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.676 [2024-12-05 14:30:15.376269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:57.676 [2024-12-05 14:30:15.376295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:109600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.676 [2024-12-05 14:30:15.376314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:57.676 [2024-12-05 14:30:15.376340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:109608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.676 [2024-12-05 14:30:15.376358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:57.676 [2024-12-05 14:30:15.376384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:109616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.676 [2024-12-05 14:30:15.376402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:57.676 [2024-12-05 14:30:15.376428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.676 [2024-12-05 14:30:15.376447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:57.676 [2024-12-05 14:30:15.376473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:109632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.676 [2024-12-05 14:30:15.376492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:57.676 [2024-12-05 14:30:15.376518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:109640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.676 [2024-12-05 14:30:15.376536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:57.676 [2024-12-05 14:30:15.376563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.676 [2024-12-05 14:30:15.376581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:57.677 [2024-12-05 14:30:15.376607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:109656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.677 [2024-12-05 14:30:15.376633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:60 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.677 [2024-12-05 14:30:15.376661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:109664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.677 [2024-12-05 14:30:15.376680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:57.677 [2024-12-05 14:30:15.376706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:109672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.677 [2024-12-05 14:30:15.376725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:57.677 [2024-12-05 14:30:15.376751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:109680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.677 [2024-12-05 14:30:15.376769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:57.677 [2024-12-05 14:30:15.376795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:109688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.677 [2024-12-05 14:30:15.376813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:57.677 [2024-12-05 14:30:15.376858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:109696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.677 [2024-12-05 14:30:15.376878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:57.677 [2024-12-05 14:30:15.377688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:109704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.677 [2024-12-05 14:30:15.377723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:57.677 [2024-12-05 14:30:15.377756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:109712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.677 [2024-12-05 14:30:15.377777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:57.677 [2024-12-05 14:30:15.377821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:109720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.677 [2024-12-05 14:30:15.377845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:57.677 [2024-12-05 14:30:15.377873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:109728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.677 [2024-12-05 14:30:15.377892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:57.677 [2024-12-05 14:30:15.377918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:109736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.677 [2024-12-05 14:30:15.377937] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:57.677 [2024-12-05 14:30:15.377963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:109744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.677 [2024-12-05 14:30:15.377982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:57.677 [2024-12-05 14:30:15.378008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:109752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.677 [2024-12-05 14:30:15.378026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:57.677 [2024-12-05 14:30:15.378066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:109760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.677 [2024-12-05 14:30:15.378086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:57.677 [2024-12-05 14:30:15.378112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:108952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.677 [2024-12-05 14:30:15.378130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:57.677 [2024-12-05 14:30:15.378156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:108960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.677 [2024-12-05 14:30:15.378175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:57.677 [2024-12-05 14:30:15.378201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:108968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.677 [2024-12-05 14:30:15.378219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:57.677 [2024-12-05 14:30:15.378246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:108976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.677 [2024-12-05 14:30:15.378264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:57.677 [2024-12-05 14:30:15.378290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:108984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.677 [2024-12-05 14:30:15.378309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:57.677 [2024-12-05 14:30:15.378335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:108992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.677 [2024-12-05 14:30:15.378353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:57.677 [2024-12-05 14:30:15.378380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:109000 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:57.677 [2024-12-05 14:30:15.378398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:57.677 [2024-12-05 14:30:15.378424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:109008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.677 [2024-12-05 14:30:15.378443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:57.677 [2024-12-05 14:30:15.378469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:109016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.677 [2024-12-05 14:30:15.378487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:57.677 [2024-12-05 14:30:15.378513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:109024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.677 [2024-12-05 14:30:15.378531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:57.677 [2024-12-05 14:30:15.378557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:109032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.677 [2024-12-05 14:30:15.378575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:57.677 [2024-12-05 14:30:15.378610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:109040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.677 [2024-12-05 14:30:15.378629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:57.677 [2024-12-05 14:30:15.378655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:109048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.677 [2024-12-05 14:30:15.378674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:57.677 [2024-12-05 14:30:15.378700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:109056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.677 [2024-12-05 14:30:15.378718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:57.677 [2024-12-05 14:30:15.378745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:109064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.677 [2024-12-05 14:30:15.378763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:57.677 [2024-12-05 14:30:15.378789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:109072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.677 [2024-12-05 14:30:15.378821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:57.677 [2024-12-05 14:30:15.378850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:100 nsid:1 lba:109080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.677 [2024-12-05 14:30:15.378869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:57.677 [2024-12-05 14:30:15.378900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:109088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.677 [2024-12-05 14:30:15.378924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:57.677 [2024-12-05 14:30:15.378952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:109096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.677 [2024-12-05 14:30:15.378971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.677 [2024-12-05 14:30:15.378997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:109104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.677 [2024-12-05 14:30:15.379015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:57.677 [2024-12-05 14:30:15.379042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:109112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.677 [2024-12-05 14:30:15.379060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:57.677 [2024-12-05 14:30:15.379086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:109120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.677 [2024-12-05 14:30:15.379105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:57.677 [2024-12-05 14:30:15.379131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:109128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.677 [2024-12-05 14:30:15.379148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:57.677 [2024-12-05 14:30:15.379183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:109136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.677 [2024-12-05 14:30:15.379203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:57.677 [2024-12-05 14:30:15.379229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:109144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.677 [2024-12-05 14:30:15.379248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:57.678 [2024-12-05 14:30:15.379274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:109152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.678 [2024-12-05 14:30:15.379292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:57.678 [2024-12-05 14:30:15.379318] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:109160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.678 [2024-12-05 14:30:15.379336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:57.678 [2024-12-05 14:30:15.379363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:109168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.678 [2024-12-05 14:30:15.379381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:57.678 [2024-12-05 14:30:15.379407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:109176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.678 [2024-12-05 14:30:15.379425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:57.678 [2024-12-05 14:30:15.379451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:109184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.678 [2024-12-05 14:30:15.379470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:57.678 [2024-12-05 14:30:15.379496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:109192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.678 [2024-12-05 14:30:15.379514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:57.678 [2024-12-05 14:30:15.379540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:108560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.678 [2024-12-05 14:30:15.379558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:57.678 [2024-12-05 14:30:15.379584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:108568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.678 [2024-12-05 14:30:15.379603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:57.678 [2024-12-05 14:30:15.379629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:108680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.678 [2024-12-05 14:30:15.379647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:57.678 [2024-12-05 14:30:15.379674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:108720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.678 [2024-12-05 14:30:15.379692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:57.678 [2024-12-05 14:30:15.379719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:108728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.678 [2024-12-05 14:30:15.379744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 
cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:57.678 [2024-12-05 14:30:15.379771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:108752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.678 [2024-12-05 14:30:15.379790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:57.678 [2024-12-05 14:30:15.379830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:108776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.678 [2024-12-05 14:30:15.379852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:57.678 [2024-12-05 14:30:15.379878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:108792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.678 [2024-12-05 14:30:15.379896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:57.678 [2024-12-05 14:30:15.379923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:109200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.678 [2024-12-05 14:30:15.379941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:57.678 [2024-12-05 14:30:15.379982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:109208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.678 [2024-12-05 14:30:15.380009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:57.678 [2024-12-05 14:30:15.380036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:109216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.678 [2024-12-05 14:30:15.380054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:57.678 [2024-12-05 14:30:15.380080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:109224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.678 [2024-12-05 14:30:15.380098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:57.678 [2024-12-05 14:30:15.380124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:109232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.678 [2024-12-05 14:30:15.380142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:57.678 [2024-12-05 14:30:15.380168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:109240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.678 [2024-12-05 14:30:15.380186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:57.678 [2024-12-05 14:30:15.380212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:109248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.678 [2024-12-05 14:30:15.380230] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:57.678 [2024-12-05 14:30:15.380257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:109256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.678 [2024-12-05 14:30:15.380274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:57.678 [2024-12-05 14:30:15.380300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:109264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.678 [2024-12-05 14:30:15.380327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:57.678 [2024-12-05 14:30:15.380355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:109272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.678 [2024-12-05 14:30:15.380373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:57.678 [2024-12-05 14:30:15.380400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:108800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.678 [2024-12-05 14:30:15.380418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:57.678 [2024-12-05 14:30:15.380444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:108808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.678 [2024-12-05 14:30:15.380462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.678 [2024-12-05 14:30:15.380490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:108816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.678 [2024-12-05 14:30:15.380508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:57.678 [2024-12-05 14:30:15.381319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:108832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.678 [2024-12-05 14:30:15.381355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:57.678 [2024-12-05 14:30:15.381388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.678 [2024-12-05 14:30:15.381409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:57.678 [2024-12-05 14:30:15.381436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:108856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.678 [2024-12-05 14:30:15.381455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:57.678 [2024-12-05 14:30:15.381481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:109280 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:57.678 [2024-12-05 14:30:15.381499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:57.678 [2024-12-05 14:30:15.381525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:109288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.678 [2024-12-05 14:30:15.381543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:57.678 [2024-12-05 14:30:15.381569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:109296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.678 [2024-12-05 14:30:15.381587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:57.678 [2024-12-05 14:30:15.381613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.678 [2024-12-05 14:30:15.381631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:57.679 [2024-12-05 14:30:15.381657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:109312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.679 [2024-12-05 14:30:15.381675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:57.679 [2024-12-05 14:30:15.381715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:109320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.679 [2024-12-05 14:30:15.381735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:57.679 [2024-12-05 14:30:15.381762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:109328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.679 [2024-12-05 14:30:15.381780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:57.679 [2024-12-05 14:30:15.381827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:109336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.679 [2024-12-05 14:30:15.381852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:57.679 [2024-12-05 14:30:15.381880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:109344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.679 [2024-12-05 14:30:15.381898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:57.679 [2024-12-05 14:30:15.381925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:109352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.679 [2024-12-05 14:30:15.381943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:57.679 [2024-12-05 14:30:15.381970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:109360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.679 [2024-12-05 14:30:15.381988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:57.679 [2024-12-05 14:30:15.382014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:109368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.679 [2024-12-05 14:30:15.382032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:57.679 [2024-12-05 14:30:15.382058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:109376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.679 [2024-12-05 14:30:15.382076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:57.679 [2024-12-05 14:30:15.382103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:109384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.679 [2024-12-05 14:30:15.382121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:57.679 [2024-12-05 14:30:15.382147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:109392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.679 [2024-12-05 14:30:15.382166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:57.679 [2024-12-05 14:30:15.382192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:109400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.679 [2024-12-05 14:30:15.382210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:57.679 [2024-12-05 14:30:15.382236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:109408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.679 [2024-12-05 14:30:15.382254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:57.679 [2024-12-05 14:30:15.382290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:109416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.679 [2024-12-05 14:30:15.382309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:57.679 [2024-12-05 14:30:15.382336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:109424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.679 [2024-12-05 14:30:15.382355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:57.679 [2024-12-05 14:30:15.382381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:109432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.679 [2024-12-05 14:30:15.382399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:57.679 [2024-12-05 
14:30:15.382425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:109440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.679 [2024-12-05 14:30:15.382443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:57.679 [2024-12-05 14:30:15.382470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:109448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.679 [2024-12-05 14:30:15.382488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:57.679 [2024-12-05 14:30:15.382514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:109456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.679 [2024-12-05 14:30:15.382532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:57.679 [2024-12-05 14:30:15.382558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:109464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.679 [2024-12-05 14:30:15.382576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:57.679 [2024-12-05 14:30:15.382603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:109472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.679 [2024-12-05 14:30:15.382621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:57.679 [2024-12-05 14:30:15.382647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:109480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.679 [2024-12-05 14:30:15.382666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:57.679 [2024-12-05 14:30:15.382692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:109488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.679 [2024-12-05 14:30:15.382710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.679 [2024-12-05 14:30:15.382737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:109496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.679 [2024-12-05 14:30:15.382755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.679 [2024-12-05 14:30:15.382781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:109504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.679 [2024-12-05 14:30:15.382800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:57.679 [2024-12-05 14:30:15.382853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:109512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.679 [2024-12-05 14:30:15.382876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:101 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:57.679 [2024-12-05 14:30:15.382902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:109520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.679 [2024-12-05 14:30:15.382920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:57.679 [2024-12-05 14:30:15.382946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:109528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.679 [2024-12-05 14:30:15.382965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:57.679 [2024-12-05 14:30:15.382991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:109536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.679 [2024-12-05 14:30:15.383009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:57.679 [2024-12-05 14:30:15.383035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:109544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.679 [2024-12-05 14:30:15.383053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:57.679 [2024-12-05 14:30:15.383080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:109552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.679 [2024-12-05 14:30:15.383098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:57.679 [2024-12-05 14:30:15.383124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:109560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.679 [2024-12-05 14:30:15.383143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:57.679 [2024-12-05 14:30:15.383169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:108512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.679 [2024-12-05 14:30:15.383187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:57.679 [2024-12-05 14:30:15.383213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:108864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.679 [2024-12-05 14:30:15.383231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:57.679 [2024-12-05 14:30:15.383257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:108872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.679 [2024-12-05 14:30:15.383275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:57.679 [2024-12-05 14:30:15.383301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:108880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.679 [2024-12-05 14:30:15.383321] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:57.679 [2024-12-05 14:30:15.383347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:108888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.679 [2024-12-05 14:30:15.383365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:57.679 [2024-12-05 14:30:15.383391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:108896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.679 [2024-12-05 14:30:15.383421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:57.679 [2024-12-05 14:30:15.383449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:108904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.680 [2024-12-05 14:30:15.383468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:57.680 [2024-12-05 14:30:15.383500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:108912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.680 [2024-12-05 14:30:15.383520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:57.680 [2024-12-05 14:30:15.383546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:108920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.680 [2024-12-05 14:30:15.383565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:57.680 [2024-12-05 14:30:15.383591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:108928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.680 [2024-12-05 14:30:15.383610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:57.680 [2024-12-05 14:30:15.383636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:108936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.680 [2024-12-05 14:30:15.383654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:57.680 [2024-12-05 14:30:15.383680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:108944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.680 [2024-12-05 14:30:15.383699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:57.680 [2024-12-05 14:30:15.383725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:109568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.680 [2024-12-05 14:30:15.383743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:57.680 [2024-12-05 14:30:15.383770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:109576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:57.680 [2024-12-05 14:30:15.383788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:57.680 [2024-12-05 14:30:15.383828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:109584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.680 [2024-12-05 14:30:15.383850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:57.680 [2024-12-05 14:30:15.383877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:109592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.680 [2024-12-05 14:30:15.383896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:57.680 [2024-12-05 14:30:15.383922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:109600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.680 [2024-12-05 14:30:15.383941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:57.680 [2024-12-05 14:30:15.383981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:109608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.680 [2024-12-05 14:30:15.384010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:57.680 [2024-12-05 14:30:15.384038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:109616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.680 [2024-12-05 14:30:15.384057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:57.680 [2024-12-05 14:30:15.384083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:109624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.680 [2024-12-05 14:30:15.384102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:57.680 [2024-12-05 14:30:15.384128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:109632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.680 [2024-12-05 14:30:15.384147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:57.680 [2024-12-05 14:30:15.384173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:109640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.680 [2024-12-05 14:30:15.384192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:57.680 [2024-12-05 14:30:15.384218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:109648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.680 [2024-12-05 14:30:15.384236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:57.680 [2024-12-05 14:30:15.384268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:109 nsid:1 lba:109656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.680 [2024-12-05 14:30:15.384288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.680 [2024-12-05 14:30:15.384314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:109664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.680 [2024-12-05 14:30:15.384333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:57.680 [2024-12-05 14:30:15.384359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:109672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.680 [2024-12-05 14:30:15.384377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:57.680 [2024-12-05 14:30:15.384404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:109680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.680 [2024-12-05 14:30:15.384422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:57.680 [2024-12-05 14:30:15.384449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:109688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.680 [2024-12-05 14:30:15.384467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:57.680 [2024-12-05 14:30:15.384493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:109696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.680 [2024-12-05 14:30:15.384511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:57.680 [2024-12-05 14:30:15.384537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:109704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.680 [2024-12-05 14:30:15.384563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:57.680 [2024-12-05 14:30:15.384591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:109712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.680 [2024-12-05 14:30:15.384609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:57.680 [2024-12-05 14:30:15.384635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:109720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.680 [2024-12-05 14:30:15.384653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:57.680 [2024-12-05 14:30:15.384680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:109728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.680 [2024-12-05 14:30:15.384698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:57.680 [2024-12-05 14:30:15.384724] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:109736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.680 [2024-12-05 14:30:15.384742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:57.680 [2024-12-05 14:30:15.384768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:109744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.680 [2024-12-05 14:30:15.384786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:57.680 [2024-12-05 14:30:15.384826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:109752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.680 [2024-12-05 14:30:15.384850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:57.680 [2024-12-05 14:30:15.384876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:109760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.680 [2024-12-05 14:30:15.384895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:57.680 [2024-12-05 14:30:15.385884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:108952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.680 [2024-12-05 14:30:15.385919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:57.680 [2024-12-05 14:30:15.385954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:108960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.680 [2024-12-05 14:30:15.385975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:57.680 [2024-12-05 14:30:15.386003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:108968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.680 [2024-12-05 14:30:15.386022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:57.680 [2024-12-05 14:30:15.386049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:108976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.680 [2024-12-05 14:30:15.386067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:57.680 [2024-12-05 14:30:15.386093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:108984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.680 [2024-12-05 14:30:15.386111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:57.680 [2024-12-05 14:30:15.386152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:108992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.680 [2024-12-05 14:30:15.386172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 
sqhd:0034 p:0 m:0 dnr:0 00:24:57.680 [2024-12-05 14:30:15.386198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:109000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.680 [2024-12-05 14:30:15.386216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:57.680 [2024-12-05 14:30:15.386242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:109008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.680 [2024-12-05 14:30:15.386261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:57.681 [2024-12-05 14:30:15.386287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:109016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.681 [2024-12-05 14:30:15.386305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:57.681 [2024-12-05 14:30:15.386331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:109024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.681 [2024-12-05 14:30:15.386349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:57.681 [2024-12-05 14:30:15.386375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:109032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.681 [2024-12-05 14:30:15.386394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:57.681 [2024-12-05 14:30:15.386421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:109040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.681 [2024-12-05 14:30:15.386439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:57.681 [2024-12-05 14:30:15.386465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:109048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.681 [2024-12-05 14:30:15.386483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:57.681 [2024-12-05 14:30:15.386509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:109056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.681 [2024-12-05 14:30:15.386527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:57.681 [2024-12-05 14:30:15.386553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:109064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.681 [2024-12-05 14:30:15.386571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:57.681 [2024-12-05 14:30:15.386598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:109072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.681 [2024-12-05 14:30:15.386616] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:57.681 [2024-12-05 14:30:15.386642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:109080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.681 [2024-12-05 14:30:15.386660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:57.681 [2024-12-05 14:30:15.386694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:109088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.681 [2024-12-05 14:30:15.386713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:57.681 [2024-12-05 14:30:15.386740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:109096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.681 [2024-12-05 14:30:15.386759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.681 [2024-12-05 14:30:15.386784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:109104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.681 [2024-12-05 14:30:15.386816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:57.681 [2024-12-05 14:30:15.386847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:109112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.681 [2024-12-05 14:30:15.386866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:57.681 [2024-12-05 14:30:15.386892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:109120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.681 [2024-12-05 14:30:15.386910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:57.681 [2024-12-05 14:30:15.386936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:109128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.681 [2024-12-05 14:30:15.386954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:57.681 [2024-12-05 14:30:15.386980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:109136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.681 [2024-12-05 14:30:15.386998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:57.681 [2024-12-05 14:30:15.387025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:109144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.681 [2024-12-05 14:30:15.387043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:57.681 [2024-12-05 14:30:15.387069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:109152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.681 [2024-12-05 
14:30:15.387087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:57.681 [2024-12-05 14:30:15.387113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:109160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.681 [2024-12-05 14:30:15.387131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:57.681 [2024-12-05 14:30:15.387157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:109168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.681 [2024-12-05 14:30:15.387176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:57.681 [2024-12-05 14:30:15.387202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:109176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.681 [2024-12-05 14:30:15.387220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:57.681 [2024-12-05 14:30:15.387246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:109184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.681 [2024-12-05 14:30:15.387273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:57.681 [2024-12-05 14:30:15.387301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:109192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.681 [2024-12-05 14:30:15.387320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:57.681 [2024-12-05 14:30:15.387346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:108560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.681 [2024-12-05 14:30:15.387364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:57.681 [2024-12-05 14:30:15.387390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:108568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.681 [2024-12-05 14:30:15.387408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:57.681 [2024-12-05 14:30:15.387434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:108680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.681 [2024-12-05 14:30:15.387452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:57.681 [2024-12-05 14:30:15.387478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:108720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.681 [2024-12-05 14:30:15.387497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:57.681 [2024-12-05 14:30:15.387523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:108728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.681 [2024-12-05 14:30:15.387541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:57.681 [2024-12-05 14:30:15.387567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:108752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.681 [2024-12-05 14:30:15.387585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:57.681 [2024-12-05 14:30:15.387611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:108776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.681 [2024-12-05 14:30:15.387629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:57.681 [2024-12-05 14:30:15.387655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:108792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.681 [2024-12-05 14:30:15.387673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:57.681 [2024-12-05 14:30:15.387700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:109200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.681 [2024-12-05 14:30:15.387718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:57.681 [2024-12-05 14:30:15.387744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:109208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.681 [2024-12-05 14:30:15.387762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:57.681 [2024-12-05 14:30:15.387788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:109216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.681 [2024-12-05 14:30:15.387828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:57.681 [2024-12-05 14:30:15.387858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:109224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.681 [2024-12-05 14:30:15.387883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:57.681 [2024-12-05 14:30:15.387911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:109232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.681 [2024-12-05 14:30:15.387929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:57.681 [2024-12-05 14:30:15.387969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:109240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.681 [2024-12-05 14:30:15.387991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:57.681 [2024-12-05 14:30:15.388018] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:109248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.681 [2024-12-05 14:30:15.388036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:57.682 [2024-12-05 14:30:15.388063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:109256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.682 [2024-12-05 14:30:15.388081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:57.682 [2024-12-05 14:30:15.388119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:109264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.682 [2024-12-05 14:30:15.388133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:57.682 [2024-12-05 14:30:15.388151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:109272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.682 [2024-12-05 14:30:15.388164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:57.682 [2024-12-05 14:30:15.388182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:108800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.682 [2024-12-05 14:30:15.388196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:57.682 [2024-12-05 14:30:15.388215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:108808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.682 [2024-12-05 14:30:15.388258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.682 [2024-12-05 14:30:15.388733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:108816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.682 [2024-12-05 14:30:15.388757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:57.682 [2024-12-05 14:30:15.388781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:108832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.682 [2024-12-05 14:30:15.388797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:57.682 [2024-12-05 14:30:15.388831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:108840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.682 [2024-12-05 14:30:15.388854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:57.682 [2024-12-05 14:30:15.388893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:108856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.682 [2024-12-05 14:30:15.388911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0065 
p:0 m:0 dnr:0 00:24:57.682 [2024-12-05 14:30:15.388932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:109280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.682 [2024-12-05 14:30:15.388945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:57.682 [2024-12-05 14:30:15.388964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:109288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.682 [2024-12-05 14:30:15.388977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:57.682 [2024-12-05 14:30:15.388995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:109296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.682 [2024-12-05 14:30:15.389009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:57.682 [2024-12-05 14:30:15.389028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:109304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.682 [2024-12-05 14:30:15.389047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:57.682 [2024-12-05 14:30:15.389067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:109312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.682 [2024-12-05 14:30:15.389080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:57.682 [2024-12-05 14:30:15.389099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:109320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.682 [2024-12-05 14:30:15.389113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:57.682 [2024-12-05 14:30:15.389131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:109328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.682 [2024-12-05 14:30:15.389159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:57.682 [2024-12-05 14:30:15.389193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:109336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.682 [2024-12-05 14:30:15.389205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:57.682 [2024-12-05 14:30:15.389223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:109344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.682 [2024-12-05 14:30:15.389236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:57.682 [2024-12-05 14:30:15.389267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:109352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.682 [2024-12-05 14:30:15.389279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:57.682 [2024-12-05 14:30:15.389296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:109360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.682 [2024-12-05 14:30:15.389308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:57.682 [2024-12-05 14:30:15.389333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:109368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.682 [2024-12-05 14:30:15.389346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:57.682 [2024-12-05 14:30:15.389363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:109376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.682 [2024-12-05 14:30:15.389375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:57.682 [2024-12-05 14:30:15.389391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:109384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.682 [2024-12-05 14:30:15.389403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:57.682 [2024-12-05 14:30:15.389420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:109392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.682 [2024-12-05 14:30:15.389432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:57.682 [2024-12-05 14:30:15.389448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:109400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.682 [2024-12-05 14:30:15.389460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:57.682 [2024-12-05 14:30:15.389476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:109408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.682 [2024-12-05 14:30:15.389488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:57.682 [2024-12-05 14:30:15.389505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:109416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.682 [2024-12-05 14:30:15.389516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:57.682 [2024-12-05 14:30:15.389532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:109424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.682 [2024-12-05 14:30:15.389544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:57.682 [2024-12-05 14:30:15.389561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:109432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.682 [2024-12-05 
14:30:15.389578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:57.682 [2024-12-05 14:30:15.389595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:109440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.682 [2024-12-05 14:30:15.389607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:57.682 [2024-12-05 14:30:15.389624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:109448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.682 [2024-12-05 14:30:15.389636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:57.682 [2024-12-05 14:30:15.389653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:109456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.682 [2024-12-05 14:30:15.389665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:57.682 [2024-12-05 14:30:15.389688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:109464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.682 [2024-12-05 14:30:15.389701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:57.682 [2024-12-05 14:30:15.389718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:109472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.682 [2024-12-05 14:30:15.389730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:57.682 [2024-12-05 14:30:15.389747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:109480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.682 [2024-12-05 14:30:15.389759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:57.682 [2024-12-05 14:30:15.389776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:109488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.682 [2024-12-05 14:30:15.389788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.682 [2024-12-05 14:30:15.389804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:109496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.682 [2024-12-05 14:30:15.389849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.682 [2024-12-05 14:30:15.389868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:109504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.682 [2024-12-05 14:30:15.389893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:57.682 [2024-12-05 14:30:15.389914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:109512 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.683 [2024-12-05 14:30:15.389928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:57.683 [2024-12-05 14:30:15.389946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:109520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.683 [2024-12-05 14:30:15.389959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:57.683 [2024-12-05 14:30:15.389978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:109528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.683 [2024-12-05 14:30:15.389991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:57.683 [2024-12-05 14:30:15.390009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:109536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.683 [2024-12-05 14:30:15.390022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:57.683 [2024-12-05 14:30:15.390041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:109544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.683 [2024-12-05 14:30:15.390061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:57.683 [2024-12-05 14:30:15.390079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:109552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.683 [2024-12-05 14:30:15.390093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:57.683 [2024-12-05 14:30:15.390111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:109560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.683 [2024-12-05 14:30:15.390131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:57.683 [2024-12-05 14:30:15.390167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:108512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.683 [2024-12-05 14:30:15.390194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:57.683 [2024-12-05 14:30:15.390211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:108864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.683 [2024-12-05 14:30:15.390222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:57.683 [2024-12-05 14:30:15.390239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:108872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.683 [2024-12-05 14:30:15.390251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:57.683 [2024-12-05 14:30:15.390268] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:108880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.683 [2024-12-05 14:30:15.390280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:57.683 [2024-12-05 14:30:15.390296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:108888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.683 [2024-12-05 14:30:15.390308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:57.683 [2024-12-05 14:30:15.390324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:108896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.683 [2024-12-05 14:30:15.390336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:57.683 [2024-12-05 14:30:15.390352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:108904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.683 [2024-12-05 14:30:15.390364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:57.683 [2024-12-05 14:30:15.390381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:108912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.683 [2024-12-05 14:30:15.390393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:57.683 [2024-12-05 14:30:15.390409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:108920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.683 [2024-12-05 14:30:15.390421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:57.683 [2024-12-05 14:30:15.390438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:108928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.683 [2024-12-05 14:30:15.390449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:57.683 [2024-12-05 14:30:15.390466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:108936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.683 [2024-12-05 14:30:15.390477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:57.683 [2024-12-05 14:30:15.390494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:108944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.683 [2024-12-05 14:30:15.390511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:57.683 [2024-12-05 14:30:15.390529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:109568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.683 [2024-12-05 14:30:15.390541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0016 p:0 
m:0 dnr:0 00:24:57.683 [2024-12-05 14:30:15.390557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:109576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.683 [2024-12-05 14:30:15.390569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:57.683 [2024-12-05 14:30:15.390586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:109584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.683 [2024-12-05 14:30:15.390598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:57.683 [2024-12-05 14:30:15.390614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:109592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.683 [2024-12-05 14:30:15.390626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:57.683 [2024-12-05 14:30:15.390642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:109600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.683 [2024-12-05 14:30:15.390654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:57.683 [2024-12-05 14:30:15.390670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:109608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.683 [2024-12-05 14:30:15.390682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:57.683 [2024-12-05 14:30:15.390698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:109616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.683 [2024-12-05 14:30:15.390710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:57.683 [2024-12-05 14:30:15.390727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:109624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.683 [2024-12-05 14:30:15.390739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:57.683 [2024-12-05 14:30:15.390755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:109632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.683 [2024-12-05 14:30:15.390767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:57.683 [2024-12-05 14:30:15.390783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:109640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.683 [2024-12-05 14:30:15.390795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:57.683 [2024-12-05 14:30:15.390827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.683 [2024-12-05 14:30:15.390854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:57.683 [2024-12-05 14:30:15.390877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:109656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.683 [2024-12-05 14:30:15.390897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.683 [2024-12-05 14:30:15.390916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:109664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.683 [2024-12-05 14:30:15.390930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:57.683 [2024-12-05 14:30:15.390948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:109672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.683 [2024-12-05 14:30:15.390961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:57.683 [2024-12-05 14:30:15.390979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:109680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.683 [2024-12-05 14:30:15.390992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:57.683 [2024-12-05 14:30:15.391010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:109688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.684 [2024-12-05 14:30:15.391023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:57.684 [2024-12-05 14:30:15.391041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:109696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.684 [2024-12-05 14:30:15.391054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:57.684 [2024-12-05 14:30:15.391072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:109704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.684 [2024-12-05 14:30:15.391085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:57.684 [2024-12-05 14:30:15.391103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:109712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.684 [2024-12-05 14:30:15.391116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:57.684 [2024-12-05 14:30:15.391134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:109720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.684 [2024-12-05 14:30:15.391147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:57.684 [2024-12-05 14:30:15.391180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:109728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.684 [2024-12-05 14:30:15.391193] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:57.684 [2024-12-05 14:30:15.391224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:109736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.684 [2024-12-05 14:30:15.391236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:57.684 [2024-12-05 14:30:15.399186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:109744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.684 [2024-12-05 14:30:15.399218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:57.684 [2024-12-05 14:30:15.399239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:109752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.684 [2024-12-05 14:30:15.399253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:57.684 [2024-12-05 14:30:15.399963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:109760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.684 [2024-12-05 14:30:15.399995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:57.684 [2024-12-05 14:30:15.400022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:108952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.684 [2024-12-05 14:30:15.400038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:57.684 [2024-12-05 14:30:15.400057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:108960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.684 [2024-12-05 14:30:15.400071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:57.684 [2024-12-05 14:30:15.400090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:108968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.684 [2024-12-05 14:30:15.400103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:57.684 [2024-12-05 14:30:15.400121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:108976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.684 [2024-12-05 14:30:15.400134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:57.684 [2024-12-05 14:30:15.400152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:108984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.684 [2024-12-05 14:30:15.400165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:57.684 [2024-12-05 14:30:15.400198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:108992 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:57.684 [2024-12-05 14:30:15.400226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:57.684 [2024-12-05 14:30:15.400242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:109000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.684 [2024-12-05 14:30:15.400254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:57.684 [2024-12-05 14:30:15.400271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:109008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.684 [2024-12-05 14:30:15.400283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:57.684 [2024-12-05 14:30:15.400300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:109016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.684 [2024-12-05 14:30:15.400312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:57.684 [2024-12-05 14:30:15.400329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:109024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.684 [2024-12-05 14:30:15.400341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:57.684 [2024-12-05 14:30:15.400357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:109032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.684 [2024-12-05 14:30:15.400369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:57.684 [2024-12-05 14:30:15.400398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:109040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.684 [2024-12-05 14:30:15.400411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:57.684 [2024-12-05 14:30:15.400428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:109048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.684 [2024-12-05 14:30:15.400440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:57.684 [2024-12-05 14:30:15.400457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:109056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.684 [2024-12-05 14:30:15.400469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:57.684 [2024-12-05 14:30:15.400485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:109064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.684 [2024-12-05 14:30:15.400497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:57.684 [2024-12-05 14:30:15.400514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:105 nsid:1 lba:109072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.684 [2024-12-05 14:30:15.400526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:57.684 [2024-12-05 14:30:15.400543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:109080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.684 [2024-12-05 14:30:15.400555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:57.684 [2024-12-05 14:30:15.400571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:109088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.684 [2024-12-05 14:30:15.400584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:57.684 [2024-12-05 14:30:15.400600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:109096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.684 [2024-12-05 14:30:15.400612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.684 [2024-12-05 14:30:15.400629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:109104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.684 [2024-12-05 14:30:15.400641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:57.684 [2024-12-05 14:30:15.400657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:109112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.684 [2024-12-05 14:30:15.400669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:57.684 [2024-12-05 14:30:15.400686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:109120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.684 [2024-12-05 14:30:15.400699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:57.684 [2024-12-05 14:30:15.400715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:109128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.684 [2024-12-05 14:30:15.400727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:57.684 [2024-12-05 14:30:15.400744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:109136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.684 [2024-12-05 14:30:15.400766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:57.684 [2024-12-05 14:30:15.400784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:109144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.684 [2024-12-05 14:30:15.400797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:57.684 [2024-12-05 
14:30:15.400829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:109152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.684 [2024-12-05 14:30:15.400842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:57.684 [2024-12-05 14:30:15.400899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:109160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.684 [2024-12-05 14:30:15.400914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:57.684 [2024-12-05 14:30:15.400933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:109168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.684 [2024-12-05 14:30:15.400946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:57.684 [2024-12-05 14:30:15.400964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:109176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.684 [2024-12-05 14:30:15.400978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:57.684 [2024-12-05 14:30:15.400996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:109184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.685 [2024-12-05 14:30:15.401009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:57.685 [2024-12-05 14:30:15.401027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:109192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.685 [2024-12-05 14:30:15.401040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:57.685 [2024-12-05 14:30:15.401058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:108560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.685 [2024-12-05 14:30:15.401072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:57.685 [2024-12-05 14:30:15.401091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:108568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.685 [2024-12-05 14:30:15.401109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:57.685 [2024-12-05 14:30:15.401127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:108680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.685 [2024-12-05 14:30:15.401140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:57.685 [2024-12-05 14:30:15.401158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:108720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.685 [2024-12-05 14:30:15.401172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:65 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:57.685 [2024-12-05 14:30:15.401219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:108728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.685 [2024-12-05 14:30:15.401239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:57.685 [2024-12-05 14:30:15.401256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:108752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.685 [2024-12-05 14:30:15.401268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:57.685 [2024-12-05 14:30:15.401285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:108776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.685 [2024-12-05 14:30:15.401297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:57.685 [2024-12-05 14:30:15.401314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:108792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.685 [2024-12-05 14:30:15.401326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:57.685 [2024-12-05 14:30:15.401342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:109200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.685 [2024-12-05 14:30:15.401354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:57.685 [2024-12-05 14:30:15.401371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:109208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.685 [2024-12-05 14:30:15.401383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:57.685 [2024-12-05 14:30:15.401399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:109216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.685 [2024-12-05 14:30:15.401411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:57.685 [2024-12-05 14:30:15.401427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:109224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.685 [2024-12-05 14:30:15.401439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:57.685 [2024-12-05 14:30:15.401456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:109232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.685 [2024-12-05 14:30:15.401468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:57.685 [2024-12-05 14:30:15.401484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:109240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.685 [2024-12-05 14:30:15.401496] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:57.685 [2024-12-05 14:30:15.401513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:109248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.685 [2024-12-05 14:30:15.401525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:57.685 [2024-12-05 14:30:15.401542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:109256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.685 [2024-12-05 14:30:15.401554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:57.685 [2024-12-05 14:30:15.401570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:109264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.685 [2024-12-05 14:30:15.401590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:57.685 [2024-12-05 14:30:15.401608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:109272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.685 [2024-12-05 14:30:15.401620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:57.685 [2024-12-05 14:30:15.401637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:108800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.685 [2024-12-05 14:30:15.401650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:57.685 [2024-12-05 14:30:15.402136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:108808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.685 [2024-12-05 14:30:15.402161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.685 [2024-12-05 14:30:15.402201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.685 [2024-12-05 14:30:15.402215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:57.685 [2024-12-05 14:30:15.402232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:108832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.685 [2024-12-05 14:30:15.402245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:57.685 [2024-12-05 14:30:15.402262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:108840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.685 [2024-12-05 14:30:15.402275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:57.685 [2024-12-05 14:30:15.402291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:108856 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:57.685 [2024-12-05 14:30:15.402303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:57.685 [2024-12-05 14:30:15.402320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:109280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.685 [2024-12-05 14:30:15.402331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:57.685 [2024-12-05 14:30:15.402348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.685 [2024-12-05 14:30:15.402359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:57.685 [2024-12-05 14:30:15.402376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:109296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.685 [2024-12-05 14:30:15.402388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:57.685 [2024-12-05 14:30:15.402405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:109304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.685 [2024-12-05 14:30:15.402416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:57.685 [2024-12-05 14:30:15.402433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.685 [2024-12-05 14:30:15.402445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:57.685 [2024-12-05 14:30:15.402484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:109320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.685 [2024-12-05 14:30:15.402498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:57.685 [2024-12-05 14:30:15.402515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:109328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.685 [2024-12-05 14:30:15.402527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:57.685 [2024-12-05 14:30:15.402543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:109336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.685 [2024-12-05 14:30:15.402555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:57.685 [2024-12-05 14:30:15.402572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.685 [2024-12-05 14:30:15.402584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:57.685 [2024-12-05 14:30:15.402600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:34 nsid:1 lba:109352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.685 [2024-12-05 14:30:15.402612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:24:57.685 [2024-12-05 14:30:15.402629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:109360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.685 [2024-12-05 14:30:15.402641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
[... further nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs in the same pattern omitted: READ and WRITE commands on sqid:1 (nsid:1, len:8, lba 108512-109760), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), timestamps 2024-12-05 14:30:15.402 through 14:30:15.418 ...]
00:24:57.691 [2024-12-05 14:30:15.418363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:108976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.691 [2024-12-05 14:30:15.418375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:24:57.691 [2024-12-05 14:30:15.418392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:108984 len:8 SGL DATA BLOCK
OFFSET 0x0 len:0x1000 00:24:57.691 [2024-12-05 14:30:15.418403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:57.691 [2024-12-05 14:30:15.418420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:108992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.691 [2024-12-05 14:30:15.418432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:57.691 [2024-12-05 14:30:15.418448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:109000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.691 [2024-12-05 14:30:15.418460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:57.691 [2024-12-05 14:30:15.418476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:109008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.691 [2024-12-05 14:30:15.418489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:57.691 [2024-12-05 14:30:15.418505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:109016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.691 [2024-12-05 14:30:15.418517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:57.691 [2024-12-05 14:30:15.418533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:109024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.691 [2024-12-05 14:30:15.418545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:57.691 [2024-12-05 14:30:15.418562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:109032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.691 [2024-12-05 14:30:15.418573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:57.691 [2024-12-05 14:30:15.418600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:109040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.691 [2024-12-05 14:30:15.418614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:57.691 [2024-12-05 14:30:15.418630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:109048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.691 [2024-12-05 14:30:15.418643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:57.691 [2024-12-05 14:30:15.418659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:109056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.691 [2024-12-05 14:30:15.418671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:57.691 [2024-12-05 14:30:15.418688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:117 nsid:1 lba:109064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.691 [2024-12-05 14:30:15.418699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:57.691 [2024-12-05 14:30:15.418716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:109072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.691 [2024-12-05 14:30:15.418728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:57.691 [2024-12-05 14:30:15.418744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:109080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.691 [2024-12-05 14:30:15.418756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:57.691 [2024-12-05 14:30:15.418773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:109088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.691 [2024-12-05 14:30:15.418785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:57.691 [2024-12-05 14:30:15.418801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:109096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.691 [2024-12-05 14:30:15.418813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.691 [2024-12-05 14:30:15.418846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:109104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.691 [2024-12-05 14:30:15.418859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:57.691 [2024-12-05 14:30:15.418876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:109112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.691 [2024-12-05 14:30:15.418888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:57.691 [2024-12-05 14:30:15.418905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:109120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.691 [2024-12-05 14:30:15.418917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:57.691 [2024-12-05 14:30:15.418933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:109128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.691 [2024-12-05 14:30:15.418945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:57.691 [2024-12-05 14:30:15.418962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:109136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.691 [2024-12-05 14:30:15.418980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:57.691 [2024-12-05 
14:30:15.418998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:109144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.691 [2024-12-05 14:30:15.419010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:57.691 [2024-12-05 14:30:15.419027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:109152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.691 [2024-12-05 14:30:15.419039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:57.691 [2024-12-05 14:30:15.419056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:109160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.691 [2024-12-05 14:30:15.419068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:57.691 [2024-12-05 14:30:15.419084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:109168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.691 [2024-12-05 14:30:15.419096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:57.691 [2024-12-05 14:30:15.419112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:109176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.691 [2024-12-05 14:30:15.419124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:57.691 [2024-12-05 14:30:15.419141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:109184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.691 [2024-12-05 14:30:15.419153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:57.691 [2024-12-05 14:30:15.419170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:109192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.691 [2024-12-05 14:30:15.419181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:57.691 [2024-12-05 14:30:15.419198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:108560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.691 [2024-12-05 14:30:15.419211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:57.691 [2024-12-05 14:30:15.419228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:108568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.691 [2024-12-05 14:30:15.419240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:57.691 [2024-12-05 14:30:15.419256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:108680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.691 [2024-12-05 14:30:15.419268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:84 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:57.691 [2024-12-05 14:30:15.419285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:108720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.691 [2024-12-05 14:30:15.419297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:57.691 [2024-12-05 14:30:15.419313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:108728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.691 [2024-12-05 14:30:15.419331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:57.691 [2024-12-05 14:30:15.419348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:108752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.691 [2024-12-05 14:30:15.419360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:57.691 [2024-12-05 14:30:15.419377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:108776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.691 [2024-12-05 14:30:15.419389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:57.691 [2024-12-05 14:30:15.419405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:108792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.692 [2024-12-05 14:30:15.419417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:57.692 [2024-12-05 14:30:15.419434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:109200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.692 [2024-12-05 14:30:15.419446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:57.692 [2024-12-05 14:30:15.419462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:109208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.692 [2024-12-05 14:30:15.419474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:57.692 [2024-12-05 14:30:15.419491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:109216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.692 [2024-12-05 14:30:15.419503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:57.692 [2024-12-05 14:30:15.419520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:109224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.692 [2024-12-05 14:30:15.419532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:57.692 [2024-12-05 14:30:15.419548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:109232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.692 [2024-12-05 14:30:15.419560] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:57.692 [2024-12-05 14:30:15.419577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:109240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.692 [2024-12-05 14:30:15.419589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:57.692 [2024-12-05 14:30:15.419605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:109248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.692 [2024-12-05 14:30:15.419617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:57.692 [2024-12-05 14:30:15.419633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:109256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.692 [2024-12-05 14:30:15.419645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:57.692 [2024-12-05 14:30:15.419662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:109264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.692 [2024-12-05 14:30:15.419674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:57.692 [2024-12-05 14:30:15.420209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:109272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.692 [2024-12-05 14:30:15.420234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:57.692 [2024-12-05 14:30:21.924209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.692 [2024-12-05 14:30:21.924319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:57.692 [2024-12-05 14:30:21.924367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:120880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.692 [2024-12-05 14:30:21.924384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:57.692 [2024-12-05 14:30:21.924403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:120320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.692 [2024-12-05 14:30:21.924416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:57.692 [2024-12-05 14:30:21.924434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:120328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.692 [2024-12-05 14:30:21.924447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:57.692 [2024-12-05 14:30:21.924465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:120352 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:57.692 [2024-12-05 14:30:21.924479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:57.692 [2024-12-05 14:30:21.924496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:120360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.692 [2024-12-05 14:30:21.924509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:57.692 [2024-12-05 14:30:21.924527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:120368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.692 [2024-12-05 14:30:21.924540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:57.692 [2024-12-05 14:30:21.924558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:120376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.692 [2024-12-05 14:30:21.924570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:57.692 [2024-12-05 14:30:21.924588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:120384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.692 [2024-12-05 14:30:21.924601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:57.692 [2024-12-05 14:30:21.924618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:120416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.692 [2024-12-05 14:30:21.924631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:57.692 [2024-12-05 14:30:21.924649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:120888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.692 [2024-12-05 14:30:21.924661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:57.692 [2024-12-05 14:30:21.924712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:120896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.692 [2024-12-05 14:30:21.924727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:57.692 [2024-12-05 14:30:21.924745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.692 [2024-12-05 14:30:21.924757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:57.692 [2024-12-05 14:30:21.924775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:120912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.692 [2024-12-05 14:30:21.924788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:57.692 [2024-12-05 14:30:21.924806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:34 nsid:1 lba:120920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.692 [2024-12-05 14:30:21.924852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.692 [2024-12-05 14:30:21.924887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:120928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.692 [2024-12-05 14:30:21.924906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.692 [2024-12-05 14:30:21.924927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:120936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.692 [2024-12-05 14:30:21.924941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:57.692 [2024-12-05 14:30:21.924963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:120944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.692 [2024-12-05 14:30:21.924979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:57.692 [2024-12-05 14:30:21.925000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:120952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.692 [2024-12-05 14:30:21.925014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:57.692 [2024-12-05 14:30:21.925034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:120960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.692 [2024-12-05 14:30:21.925049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:57.692 [2024-12-05 14:30:21.925069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:120968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.692 [2024-12-05 14:30:21.925083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:57.692 [2024-12-05 14:30:21.925103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:120976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.692 [2024-12-05 14:30:21.925117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:57.692 [2024-12-05 14:30:21.925138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:120984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.692 [2024-12-05 14:30:21.925152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:57.692 [2024-12-05 14:30:21.925213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:120992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.693 [2024-12-05 14:30:21.925258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:57.693 [2024-12-05 
14:30:21.925277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:121000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.693 [2024-12-05 14:30:21.925289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:57.693 [2024-12-05 14:30:21.925308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:121008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.693 [2024-12-05 14:30:21.925321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:57.693 [2024-12-05 14:30:21.925356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:121016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.693 [2024-12-05 14:30:21.925368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:57.693 [2024-12-05 14:30:21.925386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:120424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.693 [2024-12-05 14:30:21.925399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:57.693 [2024-12-05 14:30:21.925417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:120448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.693 [2024-12-05 14:30:21.925429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:57.693 [2024-12-05 14:30:21.925447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:120456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.693 [2024-12-05 14:30:21.925460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:57.693 [2024-12-05 14:30:21.925477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:120472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.693 [2024-12-05 14:30:21.925490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:57.693 [2024-12-05 14:30:21.925508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:120480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.693 [2024-12-05 14:30:21.925521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:57.693 [2024-12-05 14:30:21.925539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:120520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.693 [2024-12-05 14:30:21.925552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:57.693 [2024-12-05 14:30:21.925570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:120544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.693 [2024-12-05 14:30:21.925583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:65 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:57.693 [2024-12-05 14:30:21.925601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:120584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.693 [2024-12-05 14:30:21.925614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:57.693 [2024-12-05 14:30:21.925631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:121024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.693 [2024-12-05 14:30:21.925651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:57.693 [2024-12-05 14:30:21.926506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:121032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.693 [2024-12-05 14:30:21.926535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:57.693 [2024-12-05 14:30:21.926559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:121040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.693 [2024-12-05 14:30:21.926573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:57.693 [2024-12-05 14:30:21.926592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:121048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.693 [2024-12-05 14:30:21.926605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:57.693 [2024-12-05 14:30:21.926623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:121056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.693 [2024-12-05 14:30:21.926636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:57.693 [2024-12-05 14:30:21.926654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:121064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.693 [2024-12-05 14:30:21.926666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:57.693 [2024-12-05 14:30:21.926684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:121072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.693 [2024-12-05 14:30:21.926696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:57.693 [2024-12-05 14:30:21.926714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:121080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.693 [2024-12-05 14:30:21.926727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:57.693 [2024-12-05 14:30:21.926745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:121088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.693 [2024-12-05 14:30:21.926757] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:57.693 [2024-12-05 14:30:21.926775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:121096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.693 [2024-12-05 14:30:21.926788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:57.693 [2024-12-05 14:30:21.926806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:121104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.693 [2024-12-05 14:30:21.926818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:57.693 [2024-12-05 14:30:21.926847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:121112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.693 [2024-12-05 14:30:21.926864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:57.693 [2024-12-05 14:30:21.926885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:121120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.693 [2024-12-05 14:30:21.926909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.693 [2024-12-05 14:30:21.926929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:120608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.693 [2024-12-05 14:30:21.926943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:57.693 [2024-12-05 14:30:21.926961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:120616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.693 [2024-12-05 14:30:21.926974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:57.693 [2024-12-05 14:30:21.926992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:120648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.693 [2024-12-05 14:30:21.927004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:57.693 [2024-12-05 14:30:21.927040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:120664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.693 [2024-12-05 14:30:21.927053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:57.693 [2024-12-05 14:30:21.927071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:120672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.693 [2024-12-05 14:30:21.927084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:57.693 [2024-12-05 14:30:21.927102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:120680 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:57.693 [2024-12-05 14:30:21.927115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:57.693 [2024-12-05 14:30:21.927133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:120696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.693 [2024-12-05 14:30:21.927146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:57.693 [2024-12-05 14:30:21.927164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:120704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.693 [2024-12-05 14:30:21.927177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:57.693 [2024-12-05 14:30:21.927195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:121128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.693 [2024-12-05 14:30:21.927208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:57.693 [2024-12-05 14:30:21.927226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:121136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.693 [2024-12-05 14:30:21.927239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:57.693 [2024-12-05 14:30:21.927257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:121144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.693 [2024-12-05 14:30:21.927269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:57.693 [2024-12-05 14:30:21.927288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:121152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.693 [2024-12-05 14:30:21.927301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:57.693 [2024-12-05 14:30:21.927327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:121160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.693 [2024-12-05 14:30:21.927341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:57.693 [2024-12-05 14:30:21.927377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:121168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.693 [2024-12-05 14:30:21.927389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:57.693 [2024-12-05 14:30:21.927408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:121176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.694 [2024-12-05 14:30:21.927421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:57.694 [2024-12-05 14:30:21.927438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:42 nsid:1 lba:121184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.694 [2024-12-05 14:30:21.927451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:57.694 [2024-12-05 14:30:21.927468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:121192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.694 [2024-12-05 14:30:21.927483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:57.694 [2024-12-05 14:30:21.927501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:121200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.694 [2024-12-05 14:30:21.927514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:57.694 [2024-12-05 14:30:21.927531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:121208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.694 [2024-12-05 14:30:21.927544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:57.694 [2024-12-05 14:30:21.927562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:121216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.694 [2024-12-05 14:30:21.927574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:57.694 [2024-12-05 14:30:21.927592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:121224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.694 [2024-12-05 14:30:21.927605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:57.694 [2024-12-05 14:30:21.927622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:121232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.694 [2024-12-05 14:30:21.927635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:57.694 [2024-12-05 14:30:21.927652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:121240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.694 [2024-12-05 14:30:21.927665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:57.694 [2024-12-05 14:30:21.927698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:121248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.694 [2024-12-05 14:30:21.927711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:57.694 [2024-12-05 14:30:21.927736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:121256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.694 [2024-12-05 14:30:21.927750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:57.694 [2024-12-05 
14:30:21.927768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:121264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.694 [2024-12-05 14:30:21.927798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:57.694 [2024-12-05 14:30:21.927817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:121272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.694 [2024-12-05 14:30:21.927830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:57.694 [2024-12-05 14:30:21.927849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:121280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.694 [2024-12-05 14:30:21.927862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:57.694 [2024-12-05 14:30:21.927893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:121288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.694 [2024-12-05 14:30:21.927908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:57.694 [2024-12-05 14:30:21.927927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:121296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.694 [2024-12-05 14:30:21.927940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:57.694 [2024-12-05 14:30:21.927989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:121304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.694 [2024-12-05 14:30:21.928005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:57.694 [2024-12-05 14:30:21.928024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.694 [2024-12-05 14:30:21.928037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.694 [2024-12-05 14:30:21.928056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:121320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.694 [2024-12-05 14:30:21.928071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:57.694 [2024-12-05 14:30:21.928091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:121328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.694 [2024-12-05 14:30:21.928104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:57.694 [2024-12-05 14:30:21.928137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:121336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.694 [2024-12-05 14:30:21.928150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:32 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:57.694 [2024-12-05 14:30:21.928168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:121344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.694 [2024-12-05 14:30:21.928181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:57.694 [2024-12-05 14:30:21.928210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:121352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.694 [2024-12-05 14:30:21.928225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:57.694 [2024-12-05 14:30:21.928243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:121360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.694 [2024-12-05 14:30:21.928257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:57.694 [2024-12-05 14:30:21.928275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:121368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.694 [2024-12-05 14:30:21.928287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:57.694 [2024-12-05 14:30:21.928305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:120712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.694 [2024-12-05 14:30:21.928318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:57.694 [2024-12-05 14:30:21.928336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:120736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.694 [2024-12-05 14:30:21.928349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:57.694 [2024-12-05 14:30:21.928378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:120784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.694 [2024-12-05 14:30:21.928391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:57.694 [2024-12-05 14:30:21.929024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:120800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.694 [2024-12-05 14:30:21.929051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:57.694 [2024-12-05 14:30:21.929076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:120824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.694 [2024-12-05 14:30:21.929091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:57.694 [2024-12-05 14:30:21.929110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:120840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.694 [2024-12-05 14:30:21.929123] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:57.694 [2024-12-05 14:30:21.929142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:120848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.694 [2024-12-05 14:30:21.929155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:57.694 [2024-12-05 14:30:21.929174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:120856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.694 [2024-12-05 14:30:21.929187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:57.694 [2024-12-05 14:30:21.929205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:121376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.694 [2024-12-05 14:30:21.929218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:57.694 [2024-12-05 14:30:21.929236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:121384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.694 [2024-12-05 14:30:21.929260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:57.694 [2024-12-05 14:30:21.929281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:121392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.694 [2024-12-05 14:30:21.929295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:57.694 [2024-12-05 14:30:21.929314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:121400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.694 [2024-12-05 14:30:21.929327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:57.694 [2024-12-05 14:30:21.929346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:121408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.694 [2024-12-05 14:30:21.929359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:57.694 [2024-12-05 14:30:21.929377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:121416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.694 [2024-12-05 14:30:21.929390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:57.695 [2024-12-05 14:30:21.929409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:121424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.695 [2024-12-05 14:30:21.929422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:57.695 [2024-12-05 14:30:21.929440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:121432 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:57.695 [2024-12-05 14:30:21.929453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:57.695 [2024-12-05 14:30:21.929471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:121440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.695 [2024-12-05 14:30:21.929484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:57.695 [2024-12-05 14:30:21.929503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:121448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.695 [2024-12-05 14:30:21.929516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:57.695 [2024-12-05 14:30:21.929534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:121456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.695 [2024-12-05 14:30:21.929547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:57.695 [2024-12-05 14:30:21.929565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:121464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.695 [2024-12-05 14:30:21.929578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:57.695 [2024-12-05 14:30:21.929596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:121472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.695 [2024-12-05 14:30:21.929609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:57.695 [2024-12-05 14:30:21.929627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.695 [2024-12-05 14:30:21.929646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:57.695 [2024-12-05 14:30:21.929666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:120880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.695 [2024-12-05 14:30:21.929679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:57.695 [2024-12-05 14:30:21.929698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:120320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.695 [2024-12-05 14:30:21.929711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:57.695 [2024-12-05 14:30:21.929729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:120328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.695 [2024-12-05 14:30:21.929742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.695 [2024-12-05 14:30:21.929761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:87 nsid:1 lba:120352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.695 [2024-12-05 14:30:21.929774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:57.695 [2024-12-05 14:30:21.929792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.695 [2024-12-05 14:30:21.929821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:57.695 [2024-12-05 14:30:21.929844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:120368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.695 [2024-12-05 14:30:21.929860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:57.695 [2024-12-05 14:30:21.929879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.695 [2024-12-05 14:30:21.929892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:57.695 [2024-12-05 14:30:21.929911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:120384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.695 [2024-12-05 14:30:21.929924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:57.695 [2024-12-05 14:30:21.929942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:120416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.695 [2024-12-05 14:30:21.929955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:57.695 [2024-12-05 14:30:21.929973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:120888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.695 [2024-12-05 14:30:21.929986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:57.695 [2024-12-05 14:30:21.930004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:120896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.695 [2024-12-05 14:30:21.930017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:57.695 [2024-12-05 14:30:21.930035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:120904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.695 [2024-12-05 14:30:21.930055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:57.695 [2024-12-05 14:30:21.930074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:120912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.695 [2024-12-05 14:30:21.930088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:57.695 [2024-12-05 14:30:21.930106] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:120920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.695 [2024-12-05 14:30:21.930120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:57.695 [2024-12-05 14:30:21.930138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:120928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.695 [2024-12-05 14:30:21.930151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:57.695 [2024-12-05 14:30:21.930169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:120936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.695 [2024-12-05 14:30:21.930182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:57.695 [2024-12-05 14:30:21.930200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:120944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.695 [2024-12-05 14:30:21.930213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:57.695 [2024-12-05 14:30:21.930231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:120952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.695 [2024-12-05 14:30:21.930244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:57.695 [2024-12-05 14:30:21.930262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:120960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.695 [2024-12-05 14:30:21.930275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:57.695 [2024-12-05 14:30:21.930299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:120968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.695 [2024-12-05 14:30:21.930313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:57.695 [2024-12-05 14:30:21.930331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:120976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.695 [2024-12-05 14:30:21.930344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:57.695 [2024-12-05 14:30:21.930363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:120984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.695 [2024-12-05 14:30:21.930375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:57.695 [2024-12-05 14:30:21.930393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:120992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.695 [2024-12-05 14:30:21.930406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 
sqhd:0075 p:0 m:0 dnr:0 00:24:57.695 [2024-12-05 14:30:21.930424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:121000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.695 [2024-12-05 14:30:21.930437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:57.695 [2024-12-05 14:30:21.930461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:121008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.695 [2024-12-05 14:30:21.930475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:57.695 [2024-12-05 14:30:21.930493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:121016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.695 [2024-12-05 14:30:21.930506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:57.695 [2024-12-05 14:30:21.930524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:120424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.695 [2024-12-05 14:30:21.930537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:57.695 [2024-12-05 14:30:21.930555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:120448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.695 [2024-12-05 14:30:21.930568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:57.695 [2024-12-05 14:30:21.930586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:120456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.695 [2024-12-05 14:30:21.930599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:57.695 [2024-12-05 14:30:21.930617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:120472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.695 [2024-12-05 14:30:21.930631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:57.695 [2024-12-05 14:30:21.930650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:120480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.695 [2024-12-05 14:30:21.930662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:57.696 [2024-12-05 14:30:21.930680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:120520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.696 [2024-12-05 14:30:21.930693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:57.696 [2024-12-05 14:30:21.930712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:120544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.696 [2024-12-05 14:30:21.930724] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:57.696 [2024-12-05 14:30:21.930743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:120584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.696 [2024-12-05 14:30:21.930756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.696 [2024-12-05 14:30:21.931229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:121024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.696 [2024-12-05 14:30:21.931254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.696 [2024-12-05 14:30:21.931278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:121480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.696 [2024-12-05 14:30:21.931294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:57.696 [2024-12-05 14:30:21.931323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:121488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.696 [2024-12-05 14:30:21.931338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:57.696 [2024-12-05 14:30:21.931356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:121496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.696 [2024-12-05 14:30:21.931369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:57.696 [2024-12-05 14:30:21.931387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:121504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.696 [2024-12-05 14:30:21.931400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:57.696 [2024-12-05 14:30:21.931418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:121512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.696 [2024-12-05 14:30:21.931431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:57.696 [2024-12-05 14:30:21.931449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:121520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.696 [2024-12-05 14:30:21.931462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:57.696 [2024-12-05 14:30:21.931480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:121528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.696 [2024-12-05 14:30:21.931493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:57.696 [2024-12-05 14:30:21.931511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:121536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.696 
[2024-12-05 14:30:21.931523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:57.696 [2024-12-05 14:30:21.931541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:121544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.696 [2024-12-05 14:30:21.931554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:57.696 [2024-12-05 14:30:21.931572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.696 [2024-12-05 14:30:21.931586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:57.696 [2024-12-05 14:30:21.931604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:121560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.696 [2024-12-05 14:30:21.931617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:57.696 [2024-12-05 14:30:21.931635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:121568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.696 [2024-12-05 14:30:21.931648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:57.696 [2024-12-05 14:30:21.931666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:121576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.696 [2024-12-05 14:30:21.931679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:57.696 [2024-12-05 14:30:21.931697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:121584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.696 [2024-12-05 14:30:21.931716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:57.696 [2024-12-05 14:30:21.931734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:121592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.696 [2024-12-05 14:30:21.931748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:57.696 [2024-12-05 14:30:21.931766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:121600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.696 [2024-12-05 14:30:21.931780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:57.696 [2024-12-05 14:30:21.931798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:121608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.696 [2024-12-05 14:30:21.931827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:57.696 [2024-12-05 14:30:21.931848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:121616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.696 [2024-12-05 14:30:21.931862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:57.696 [2024-12-05 14:30:21.931880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:121624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.696 [2024-12-05 14:30:21.931893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:57.696 [2024-12-05 14:30:21.931911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:121632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.696 [2024-12-05 14:30:21.931924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:57.696 [2024-12-05 14:30:21.931942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:121032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.696 [2024-12-05 14:30:21.931967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:57.696 [2024-12-05 14:30:21.931986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:121040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.696 [2024-12-05 14:30:21.932000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:57.696 [2024-12-05 14:30:21.932018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:121048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.696 [2024-12-05 14:30:21.932031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:57.696 [2024-12-05 14:30:21.932050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:121056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.696 [2024-12-05 14:30:21.932063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:57.696 [2024-12-05 14:30:21.932081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:121064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.696 [2024-12-05 14:30:21.932095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:57.696 [2024-12-05 14:30:21.932113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:121072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.696 [2024-12-05 14:30:21.932134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:57.696 [2024-12-05 14:30:21.932154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:121080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.696 [2024-12-05 14:30:21.932168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:57.696 [2024-12-05 14:30:21.932186] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:121088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.696 [2024-12-05 14:30:21.932199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:57.696 [2024-12-05 14:30:21.932217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:121096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.696 [2024-12-05 14:30:21.932230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:57.696 [2024-12-05 14:30:21.932248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:121104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.696 [2024-12-05 14:30:21.932262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:57.697 [2024-12-05 14:30:21.932280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:121112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.697 [2024-12-05 14:30:21.932293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:57.697 [2024-12-05 14:30:21.932311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:121120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.697 [2024-12-05 14:30:21.932324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.697 [2024-12-05 14:30:21.932342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:120608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.697 [2024-12-05 14:30:21.932355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:57.697 [2024-12-05 14:30:21.932373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:120616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.697 [2024-12-05 14:30:21.932386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:57.697 [2024-12-05 14:30:21.932404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:120648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.697 [2024-12-05 14:30:21.932417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:57.697 [2024-12-05 14:30:21.932435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:120664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.697 [2024-12-05 14:30:21.932448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:57.697 [2024-12-05 14:30:21.932465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:120672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.697 [2024-12-05 14:30:21.932478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0026 p:0 
m:0 dnr:0 00:24:57.697 [2024-12-05 14:30:21.932496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:120680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.697 [2024-12-05 14:30:21.932515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:57.697 [2024-12-05 14:30:21.932534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:120696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.697 [2024-12-05 14:30:21.932548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:57.697 [2024-12-05 14:30:21.932565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:120704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.697 [2024-12-05 14:30:21.932578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:57.697 [2024-12-05 14:30:21.932596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:121128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.697 [2024-12-05 14:30:21.932609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:57.697 [2024-12-05 14:30:21.932627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:121136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.697 [2024-12-05 14:30:21.932640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:57.697 [2024-12-05 14:30:21.932658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:121144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.697 [2024-12-05 14:30:21.932671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:57.697 [2024-12-05 14:30:21.932689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:121152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.697 [2024-12-05 14:30:21.932702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:57.697 [2024-12-05 14:30:21.932720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:121160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.697 [2024-12-05 14:30:21.932733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:57.697 [2024-12-05 14:30:21.932750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:121168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.697 [2024-12-05 14:30:21.932763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:57.697 [2024-12-05 14:30:21.932782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:121176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.697 [2024-12-05 14:30:21.932795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:57.697 [2024-12-05 14:30:21.932827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:121184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.697 [2024-12-05 14:30:21.932843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:57.697 [2024-12-05 14:30:21.932862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:121192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.697 [2024-12-05 14:30:21.932875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:57.697 [2024-12-05 14:30:21.932893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:121200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.697 [2024-12-05 14:30:21.932906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:57.697 [2024-12-05 14:30:21.932933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:121208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.697 [2024-12-05 14:30:21.932947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:57.697 [2024-12-05 14:30:21.932965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:121216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.697 [2024-12-05 14:30:21.932978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:57.697 [2024-12-05 14:30:21.932997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:121224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.697 [2024-12-05 14:30:21.933009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:57.697 [2024-12-05 14:30:21.933027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:121232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.697 [2024-12-05 14:30:21.933040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:57.697 [2024-12-05 14:30:21.933058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:121240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.697 [2024-12-05 14:30:21.933071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:57.697 [2024-12-05 14:30:21.933089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:121248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.697 [2024-12-05 14:30:21.933102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:57.697 [2024-12-05 14:30:21.933120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:121256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.697 [2024-12-05 
14:30:21.933133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:57.697 [2024-12-05 14:30:21.933151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:121264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.697 [2024-12-05 14:30:21.933163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:57.697 [2024-12-05 14:30:21.933181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:121272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.697 [2024-12-05 14:30:21.933194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:57.697 [2024-12-05 14:30:21.933212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:121280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.697 [2024-12-05 14:30:21.933224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:57.697 [2024-12-05 14:30:21.933243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:121288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.697 [2024-12-05 14:30:21.933255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:57.697 [2024-12-05 14:30:21.933273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:121296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.697 [2024-12-05 14:30:21.933286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:57.697 [2024-12-05 14:30:21.933310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:121304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.697 [2024-12-05 14:30:21.933324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:57.697 [2024-12-05 14:30:21.933342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:121312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.697 [2024-12-05 14:30:21.933361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.697 [2024-12-05 14:30:21.933380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:121320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.697 [2024-12-05 14:30:21.933393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:57.697 [2024-12-05 14:30:21.933412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:121328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.697 [2024-12-05 14:30:21.933425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:57.697 [2024-12-05 14:30:21.933443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:121336 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.697 [2024-12-05 14:30:21.933456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:57.697 [2024-12-05 14:30:21.933474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:121344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.697 [2024-12-05 14:30:21.933487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:57.697 [2024-12-05 14:30:21.933505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:121352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.698 [2024-12-05 14:30:21.933518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:57.698 [2024-12-05 14:30:21.933536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:121360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.698 [2024-12-05 14:30:21.933548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:57.698 [2024-12-05 14:30:21.933566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:121368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.698 [2024-12-05 14:30:21.933579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:57.698 [2024-12-05 14:30:21.933603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:120712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.698 [2024-12-05 14:30:21.933617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:57.698 [2024-12-05 14:30:21.933635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:120736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.698 [2024-12-05 14:30:21.933649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:57.698 [2024-12-05 14:30:21.934348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:120784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.698 [2024-12-05 14:30:21.934375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:57.698 [2024-12-05 14:30:21.934398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:120800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.698 [2024-12-05 14:30:21.934425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:57.698 [2024-12-05 14:30:21.934446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:120824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.698 [2024-12-05 14:30:21.934459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:57.698 [2024-12-05 14:30:21.934478] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:120840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.698 [2024-12-05 14:30:21.934491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:57.698 [2024-12-05 14:30:21.934509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:120848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.698 [2024-12-05 14:30:21.934522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:57.698 [2024-12-05 14:30:21.934540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:120856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.698 [2024-12-05 14:30:21.934553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:57.698 [2024-12-05 14:30:21.934572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:121376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.698 [2024-12-05 14:30:21.934590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:57.698 [2024-12-05 14:30:21.934609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:121384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.698 [2024-12-05 14:30:21.934623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:57.698 [2024-12-05 14:30:21.934641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:121392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.698 [2024-12-05 14:30:21.934654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:57.698 [2024-12-05 14:30:21.934672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:121400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.698 [2024-12-05 14:30:21.934685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:57.698 [2024-12-05 14:30:21.934703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:121408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.698 [2024-12-05 14:30:21.934716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:57.698 [2024-12-05 14:30:21.934733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:121416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.698 [2024-12-05 14:30:21.934746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:57.698 [2024-12-05 14:30:21.934764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:121424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.698 [2024-12-05 14:30:21.934777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0057 p:0 
m:0 dnr:0 00:24:57.698 [2024-12-05 14:30:21.934795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:121432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.698 [2024-12-05 14:30:21.934829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:57.698 [2024-12-05 14:30:21.934856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:121440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.698 [2024-12-05 14:30:21.934871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:57.698 [2024-12-05 14:30:21.934889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:121448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.698 [2024-12-05 14:30:21.934902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:57.698 [2024-12-05 14:30:21.934921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:121456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.698 [2024-12-05 14:30:21.934933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:57.698 [2024-12-05 14:30:21.934952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:121464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.698 [2024-12-05 14:30:21.934965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:57.698 [2024-12-05 14:30:21.934983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:121472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.698 [2024-12-05 14:30:21.934996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:57.698 [2024-12-05 14:30:21.935014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.698 [2024-12-05 14:30:21.935027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:57.698 [2024-12-05 14:30:21.935045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:120880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.698 [2024-12-05 14:30:21.935058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:57.698 [2024-12-05 14:30:21.935076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:120320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.698 [2024-12-05 14:30:21.935089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:57.698 [2024-12-05 14:30:21.935107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:120328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.698 [2024-12-05 14:30:21.935123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.698 [2024-12-05 14:30:21.935142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:120352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.698 [2024-12-05 14:30:21.935155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:57.698 [2024-12-05 14:30:21.935172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:120360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.698 [2024-12-05 14:30:21.935185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:57.698 [2024-12-05 14:30:21.935203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:120368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.698 [2024-12-05 14:30:21.935222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:57.698 [2024-12-05 14:30:21.935242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:120376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.698 [2024-12-05 14:30:21.935255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:57.698 [2024-12-05 14:30:21.935272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.698 [2024-12-05 14:30:21.935285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:57.698 [2024-12-05 14:30:21.935303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:120416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.698 [2024-12-05 14:30:21.946356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:57.698 [2024-12-05 14:30:21.946408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.698 [2024-12-05 14:30:21.946427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:57.698 [2024-12-05 14:30:21.946448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:120896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.698 [2024-12-05 14:30:21.946462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:57.698 [2024-12-05 14:30:21.946480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:120904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.698 [2024-12-05 14:30:21.946493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:57.698 [2024-12-05 14:30:21.946511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:120912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.698 [2024-12-05 14:30:21.946524] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:57.698 [2024-12-05 14:30:21.946542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:120920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.698 [2024-12-05 14:30:21.946555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:57.698 [2024-12-05 14:30:21.946574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:120928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.699 [2024-12-05 14:30:21.946587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:57.699 [2024-12-05 14:30:21.946605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:120936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.699 [2024-12-05 14:30:21.946617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:57.699 [2024-12-05 14:30:21.946636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:120944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.699 [2024-12-05 14:30:21.946649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:57.699 [2024-12-05 14:30:21.946666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:120952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.699 [2024-12-05 14:30:21.946679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:57.699 [2024-12-05 14:30:21.946711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:120960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.699 [2024-12-05 14:30:21.946727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:57.699 [2024-12-05 14:30:21.946745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:120968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.699 [2024-12-05 14:30:21.946758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:57.699 [2024-12-05 14:30:21.946776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:120976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.699 [2024-12-05 14:30:21.946789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:57.699 [2024-12-05 14:30:21.946853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:120984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.699 [2024-12-05 14:30:21.946872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:57.699 [2024-12-05 14:30:21.946894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:120992 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:57.699 [2024-12-05 14:30:21.946917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:57.699 [2024-12-05 14:30:21.946938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:121000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.699 [2024-12-05 14:30:21.946952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:57.699 [2024-12-05 14:30:21.946972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:121008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.699 [2024-12-05 14:30:21.946986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:57.699 [2024-12-05 14:30:21.947007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:121016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.699 [2024-12-05 14:30:21.947021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:57.699 [2024-12-05 14:30:21.947041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:120424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.699 [2024-12-05 14:30:21.947055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:57.699 [2024-12-05 14:30:21.947076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:120448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.699 [2024-12-05 14:30:21.947091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:57.699 [2024-12-05 14:30:21.947112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:120456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.699 [2024-12-05 14:30:21.947126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:57.699 [2024-12-05 14:30:21.947146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:120472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.699 [2024-12-05 14:30:21.947160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:57.699 [2024-12-05 14:30:21.947238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:120480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.699 [2024-12-05 14:30:21.947267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:57.699 [2024-12-05 14:30:21.947285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:120520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.699 [2024-12-05 14:30:21.947298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:57.699 [2024-12-05 14:30:21.947316] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:101 nsid:1 lba:120544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.699 [2024-12-05 14:30:21.947329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:57.699 [2024-12-05 14:30:21.947931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:120584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.699 [2024-12-05 14:30:21.947981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.699 [2024-12-05 14:30:21.948010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:121024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.699 [2024-12-05 14:30:21.948028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.699 [2024-12-05 14:30:21.948050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:121480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.699 [2024-12-05 14:30:21.948066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:57.699 [2024-12-05 14:30:21.948086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:121488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.699 [2024-12-05 14:30:21.948100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:57.699 [2024-12-05 14:30:21.948120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:121496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.699 [2024-12-05 14:30:21.948135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:57.699 [2024-12-05 14:30:21.948156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:121504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.699 [2024-12-05 14:30:21.948170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:57.699 [2024-12-05 14:30:21.948190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:121512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.699 [2024-12-05 14:30:21.948204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:57.699 [2024-12-05 14:30:21.948241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:121520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.699 [2024-12-05 14:30:21.948284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:57.699 [2024-12-05 14:30:21.948302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:121528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.699 [2024-12-05 14:30:21.948315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:57.699 
[2024-12-05 14:30:21.948333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:121536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.699 [2024-12-05 14:30:21.948357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:57.699 [2024-12-05 14:30:21.948377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:121544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.699 [2024-12-05 14:30:21.948391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:57.699 [2024-12-05 14:30:21.948409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:121552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.699 [2024-12-05 14:30:21.948422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:57.699 [2024-12-05 14:30:21.948440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:121560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.699 [2024-12-05 14:30:21.948453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:57.699 [2024-12-05 14:30:21.948471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:121568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.699 [2024-12-05 14:30:21.948483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:57.699 [2024-12-05 14:30:21.948501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:121576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.699 [2024-12-05 14:30:21.948514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:57.699 [2024-12-05 14:30:21.948532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:121584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.699 [2024-12-05 14:30:21.948545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:57.699 [2024-12-05 14:30:21.948563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:121592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.699 [2024-12-05 14:30:21.948576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:57.699 [2024-12-05 14:30:21.948594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:121600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.699 [2024-12-05 14:30:21.948607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:57.699 [2024-12-05 14:30:21.948625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:121608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.699 [2024-12-05 14:30:21.948637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:71 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:57.699 [2024-12-05 14:30:21.948655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:121616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.699 [2024-12-05 14:30:21.948668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:57.700 [2024-12-05 14:30:21.948686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:121624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.700 [2024-12-05 14:30:21.948699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:57.700 [2024-12-05 14:30:21.948717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:121632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.700 [2024-12-05 14:30:21.948737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:57.700 [2024-12-05 14:30:21.948756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:121032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.700 [2024-12-05 14:30:21.948769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:57.700 [2024-12-05 14:30:21.948787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:121040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.700 [2024-12-05 14:30:21.948800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:57.700 [2024-12-05 14:30:21.948850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:121048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.700 [2024-12-05 14:30:21.948864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:57.700 [2024-12-05 14:30:21.948898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:121056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.700 [2024-12-05 14:30:21.948917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:57.700 [2024-12-05 14:30:21.948938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:121064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.700 [2024-12-05 14:30:21.948952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:57.700 [2024-12-05 14:30:21.948973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:121072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.700 [2024-12-05 14:30:21.948987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:57.700 [2024-12-05 14:30:21.949007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:121080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.700 [2024-12-05 14:30:21.949021] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:57.700 [2024-12-05 14:30:21.949043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:121088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.700 [2024-12-05 14:30:21.949057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:57.700 [2024-12-05 14:30:21.949078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:121096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.700 [2024-12-05 14:30:21.949092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:57.700 [2024-12-05 14:30:21.949112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:121104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.700 [2024-12-05 14:30:21.949127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:57.700 [2024-12-05 14:30:21.949147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:121112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.700 [2024-12-05 14:30:21.949178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:57.700 [2024-12-05 14:30:21.949227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:121120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.700 [2024-12-05 14:30:21.949262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.700 [2024-12-05 14:30:21.949281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:120608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.700 [2024-12-05 14:30:21.949294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:57.700 [2024-12-05 14:30:21.949312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:120616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.700 [2024-12-05 14:30:21.949325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:57.700 [2024-12-05 14:30:21.949343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:120648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.700 [2024-12-05 14:30:21.949355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:57.700 [2024-12-05 14:30:21.949373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:120664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.700 [2024-12-05 14:30:21.949386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:57.700 [2024-12-05 14:30:21.949404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:120672 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:57.700 [2024-12-05 14:30:21.949417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:57.700 [2024-12-05 14:30:21.949435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:120680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.700 [2024-12-05 14:30:21.949447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:57.700 [2024-12-05 14:30:21.949465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:120696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.700 [2024-12-05 14:30:21.949478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:57.700 [2024-12-05 14:30:21.949496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:120704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.700 [2024-12-05 14:30:21.949509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:57.700 [2024-12-05 14:30:21.949527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:121128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.700 [2024-12-05 14:30:21.949539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:57.700 [2024-12-05 14:30:21.949557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:121136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.700 [2024-12-05 14:30:21.949570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:57.700 [2024-12-05 14:30:21.949588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:121144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.700 [2024-12-05 14:30:21.949601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:57.700 [2024-12-05 14:30:21.949618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:121152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.700 [2024-12-05 14:30:21.949631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:57.700 [2024-12-05 14:30:21.949655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:121160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.700 [2024-12-05 14:30:21.949669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:57.700 [2024-12-05 14:30:21.949687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:121168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.700 [2024-12-05 14:30:21.949700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:57.700 [2024-12-05 14:30:21.949717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:90 nsid:1 lba:121176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.700 [2024-12-05 14:30:21.949731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:57.700 [2024-12-05 14:30:21.949749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:121184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.700 [2024-12-05 14:30:21.949762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:57.700 [2024-12-05 14:30:21.949780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:121192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.700 [2024-12-05 14:30:21.949793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:57.700 [2024-12-05 14:30:21.949810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:121200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.700 [2024-12-05 14:30:21.949839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:57.700 [2024-12-05 14:30:21.949857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:121208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.700 [2024-12-05 14:30:21.949870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:57.700 [2024-12-05 14:30:21.949901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:121216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.700 [2024-12-05 14:30:21.949916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:57.700 [2024-12-05 14:30:21.949935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:121224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.700 [2024-12-05 14:30:21.949948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:57.700 [2024-12-05 14:30:21.949966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:121232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.700 [2024-12-05 14:30:21.949979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:57.700 [2024-12-05 14:30:21.949998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:121240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.700 [2024-12-05 14:30:21.950011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:57.700 [2024-12-05 14:30:21.950029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:121248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.700 [2024-12-05 14:30:21.950042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:57.700 [2024-12-05 
14:30:21.950068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:121256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.701 [2024-12-05 14:30:21.950082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:57.701 [2024-12-05 14:30:21.950100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:121264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.701 [2024-12-05 14:30:21.950113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:57.701 [2024-12-05 14:30:21.950132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:121272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.701 [2024-12-05 14:30:21.950145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:57.701 [2024-12-05 14:30:21.950163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:121280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.701 [2024-12-05 14:30:21.950190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:57.701 [2024-12-05 14:30:21.950208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:121288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.701 [2024-12-05 14:30:21.950221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:57.701 [2024-12-05 14:30:21.950239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.701 [2024-12-05 14:30:21.950251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:57.701 [2024-12-05 14:30:21.950270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:121304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.701 [2024-12-05 14:30:21.950283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:57.701 [2024-12-05 14:30:21.950301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:121312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.701 [2024-12-05 14:30:21.950314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.701 [2024-12-05 14:30:21.950332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:121320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.701 [2024-12-05 14:30:21.950344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:57.701 [2024-12-05 14:30:21.950362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:121328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.701 [2024-12-05 14:30:21.950375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:92 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:57.701 [2024-12-05 14:30:21.950393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:121336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.701 [2024-12-05 14:30:21.950405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:57.701 [2024-12-05 14:30:21.950423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:121344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.701 [2024-12-05 14:30:21.950436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:57.701 [2024-12-05 14:30:21.950461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:121352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.701 [2024-12-05 14:30:21.950475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:57.701 [2024-12-05 14:30:21.950493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:121360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.701 [2024-12-05 14:30:21.950506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:57.701 [2024-12-05 14:30:21.950523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:121368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.701 [2024-12-05 14:30:21.950536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:57.701 [2024-12-05 14:30:21.950554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:120712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.701 [2024-12-05 14:30:21.950567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:57.701 [2024-12-05 14:30:21.951262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:120736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.701 [2024-12-05 14:30:21.951287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:57.701 [2024-12-05 14:30:21.951312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:120784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.701 [2024-12-05 14:30:21.951327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:57.701 [2024-12-05 14:30:21.951345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:120800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.701 [2024-12-05 14:30:21.951358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:57.701 [2024-12-05 14:30:21.951376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:120824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.701 [2024-12-05 14:30:21.951390] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:57.701 [2024-12-05 14:30:21.951407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:120840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.701 [2024-12-05 14:30:21.951420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:57.701 [2024-12-05 14:30:21.951438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:120848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.701 [2024-12-05 14:30:21.951451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:57.701 [2024-12-05 14:30:21.951469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:120856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.701 [2024-12-05 14:30:21.951482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:57.701 [2024-12-05 14:30:21.951500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:121376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.701 [2024-12-05 14:30:21.951512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:57.701 [2024-12-05 14:30:21.951531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:121384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.701 [2024-12-05 14:30:21.951553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:57.701 [2024-12-05 14:30:21.951573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:121392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.701 [2024-12-05 14:30:21.951587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:57.701 [2024-12-05 14:30:21.951604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:121400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.701 [2024-12-05 14:30:21.951617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:57.701 [2024-12-05 14:30:21.951635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:121408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.701 [2024-12-05 14:30:21.951648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:57.701 [2024-12-05 14:30:21.951666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:121416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.701 [2024-12-05 14:30:21.951678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:57.701 [2024-12-05 14:30:21.951696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:121424 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:57.701 [2024-12-05 14:30:21.951709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:57.701 [2024-12-05 14:30:21.951727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:121432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.701 [2024-12-05 14:30:21.951740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:57.701 [2024-12-05 14:30:21.951758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:121440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.701 [2024-12-05 14:30:21.951770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:57.701 [2024-12-05 14:30:21.951788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:121448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.701 [2024-12-05 14:30:21.951801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:57.701 [2024-12-05 14:30:21.951849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:121456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.701 [2024-12-05 14:30:21.951866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:57.701 [2024-12-05 14:30:21.951886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:121464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.701 [2024-12-05 14:30:21.951899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:57.702 [2024-12-05 14:30:21.951918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:121472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.702 [2024-12-05 14:30:21.951931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:57.702 [2024-12-05 14:30:21.951961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.702 [2024-12-05 14:30:21.951985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:57.702 [2024-12-05 14:30:21.952005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:120880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.702 [2024-12-05 14:30:21.952019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:57.702 [2024-12-05 14:30:21.952038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:120320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.702 [2024-12-05 14:30:21.952052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:57.702 [2024-12-05 14:30:21.952071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:44 nsid:1 lba:120328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.702 [2024-12-05 14:30:21.952084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.702 [2024-12-05 14:30:21.952103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:120352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.702 [2024-12-05 14:30:21.952117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:57.702 [2024-12-05 14:30:21.952135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.702 [2024-12-05 14:30:21.952148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:57.702 [2024-12-05 14:30:21.952167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:120368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.702 [2024-12-05 14:30:21.952180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:57.702 [2024-12-05 14:30:21.952198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:120376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.702 [2024-12-05 14:30:21.952211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:57.702 [2024-12-05 14:30:21.952244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:120384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.702 [2024-12-05 14:30:21.952257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:57.702 [2024-12-05 14:30:21.952275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:120416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.702 [2024-12-05 14:30:21.952287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:57.702 [2024-12-05 14:30:21.952305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:120888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.702 [2024-12-05 14:30:21.952318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:57.702 [2024-12-05 14:30:21.952336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:120896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.702 [2024-12-05 14:30:21.952348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:57.702 [2024-12-05 14:30:21.952366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:120904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.702 [2024-12-05 14:30:21.952379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:57.702 [2024-12-05 14:30:21.952403] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:120912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.702 [2024-12-05 14:30:21.952417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:57.702 [2024-12-05 14:30:21.952435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:120920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.702 [2024-12-05 14:30:21.952447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:57.702 [2024-12-05 14:30:21.952465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:120928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.702 [2024-12-05 14:30:21.952478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:57.702 [2024-12-05 14:30:21.952496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:120936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.702 [2024-12-05 14:30:21.952508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:57.702 [2024-12-05 14:30:21.952526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:120944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.702 [2024-12-05 14:30:21.952539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:57.702 [2024-12-05 14:30:21.952557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:120952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.702 [2024-12-05 14:30:21.952570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:57.702 [2024-12-05 14:30:21.952588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:120960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.702 [2024-12-05 14:30:21.952601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:57.702 [2024-12-05 14:30:21.952619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:120968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.702 [2024-12-05 14:30:21.952631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:57.702 [2024-12-05 14:30:21.952650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:120976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.702 [2024-12-05 14:30:21.952662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:57.702 [2024-12-05 14:30:21.952680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:120984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.702 [2024-12-05 14:30:21.952693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 
sqhd:0074 p:0 m:0 dnr:0 00:24:57.702 [2024-12-05 14:30:21.952711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:120992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.702 [2024-12-05 14:30:21.952723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:57.702 [2024-12-05 14:30:21.952741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:121000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.702 [2024-12-05 14:30:21.952754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:57.702 [2024-12-05 14:30:21.952778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:121008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.702 [2024-12-05 14:30:21.952792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:57.702 [2024-12-05 14:30:21.952810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:121016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.702 [2024-12-05 14:30:21.952850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:57.702 [2024-12-05 14:30:21.952871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:120424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.702 [2024-12-05 14:30:21.952884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:57.702 [2024-12-05 14:30:21.952902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:120448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.702 [2024-12-05 14:30:21.952916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:57.702 [2024-12-05 14:30:21.952934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:120456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.702 [2024-12-05 14:30:21.952947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:57.702 [2024-12-05 14:30:21.952965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:120472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.702 [2024-12-05 14:30:21.952978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:57.702 [2024-12-05 14:30:21.952997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:120480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.702 [2024-12-05 14:30:21.953010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:57.702 [2024-12-05 14:30:21.953028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:120520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.702 [2024-12-05 14:30:21.953042] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:57.702 [2024-12-05 14:30:21.953543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:120544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.702 [2024-12-05 14:30:21.953569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:57.702 [2024-12-05 14:30:21.953593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:120584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.702 [2024-12-05 14:30:21.953608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.702 [2024-12-05 14:30:21.953626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:121024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.702 [2024-12-05 14:30:21.953639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.702 [2024-12-05 14:30:21.953658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:121480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.703 [2024-12-05 14:30:21.953671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:57.703 [2024-12-05 14:30:21.953699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:121488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.703 [2024-12-05 14:30:21.953714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:57.703 [2024-12-05 14:30:21.953732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:121496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.703 [2024-12-05 14:30:21.953745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:57.703 [2024-12-05 14:30:21.953763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:121504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.703 [2024-12-05 14:30:21.953776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:57.703 [2024-12-05 14:30:21.953794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:121512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.703 [2024-12-05 14:30:21.953808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:57.703 [2024-12-05 14:30:21.953857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:121520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.703 [2024-12-05 14:30:21.953874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:57.703 [2024-12-05 14:30:21.953892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:121528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.703 
[2024-12-05 14:30:21.953906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:57.703 [2024-12-05 14:30:21.953924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.703 [2024-12-05 14:30:21.953937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:57.703 [2024-12-05 14:30:21.953956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:121544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.703 [2024-12-05 14:30:21.953969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:57.703 [2024-12-05 14:30:21.953987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:121552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.703 [2024-12-05 14:30:21.954000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:57.703 [2024-12-05 14:30:21.954019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:121560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.703 [2024-12-05 14:30:21.954032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:57.703 [2024-12-05 14:30:21.954051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:121568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.703 [2024-12-05 14:30:21.954064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:57.703 [2024-12-05 14:30:21.954082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:121576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.703 [2024-12-05 14:30:21.954095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:57.703 [2024-12-05 14:30:21.954114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:121584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.703 [2024-12-05 14:30:21.954134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:57.703 [2024-12-05 14:30:21.954154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:121592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.703 [2024-12-05 14:30:21.954168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:57.703 [2024-12-05 14:30:21.954201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:121600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.703 [2024-12-05 14:30:21.954214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:57.703 [2024-12-05 14:30:21.954233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 
lba:121608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.703 [2024-12-05 14:30:21.954246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:57.703 [2024-12-05 14:30:21.954263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:121616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.703 [2024-12-05 14:30:21.954276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:57.703 [2024-12-05 14:30:21.954294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:121624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.703 [2024-12-05 14:30:21.954307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:57.703 [2024-12-05 14:30:21.954325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:121632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.703 [2024-12-05 14:30:21.954337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:57.703 [2024-12-05 14:30:21.954355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:121032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.703 [2024-12-05 14:30:21.954367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:57.703 [2024-12-05 14:30:21.954385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:121040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.703 [2024-12-05 14:30:21.954398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:57.703 [2024-12-05 14:30:21.954416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:121048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.703 [2024-12-05 14:30:21.954429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:57.703 [2024-12-05 14:30:21.954447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:121056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.703 [2024-12-05 14:30:21.954460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:57.703 [2024-12-05 14:30:21.954478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:121064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.703 [2024-12-05 14:30:21.954490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:57.703 [2024-12-05 14:30:21.954508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:121072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.703 [2024-12-05 14:30:21.954526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:57.703 [2024-12-05 14:30:21.954546] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:121080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.703 [2024-12-05 14:30:21.954559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:57.703 [2024-12-05 14:30:21.954576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:121088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.703 [2024-12-05 14:30:21.954589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:57.703 [2024-12-05 14:30:21.954607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:121096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.703 [2024-12-05 14:30:21.954620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:57.703 [2024-12-05 14:30:21.954638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:121104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.703 [2024-12-05 14:30:21.954651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:57.703 [2024-12-05 14:30:21.954668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:121112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.703 [2024-12-05 14:30:21.954681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:57.703 [2024-12-05 14:30:21.954699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:121120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.703 [2024-12-05 14:30:21.954711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.703 [2024-12-05 14:30:21.954730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:120608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.703 [2024-12-05 14:30:21.954743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:57.703 [2024-12-05 14:30:21.954761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:120616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.703 [2024-12-05 14:30:21.954774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:57.703 [2024-12-05 14:30:21.954791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:120648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.703 [2024-12-05 14:30:21.954804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:57.703 [2024-12-05 14:30:21.954849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:120664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.703 [2024-12-05 14:30:21.954865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0025 p:0 
m:0 dnr:0 00:24:57.703 [2024-12-05 14:30:21.954884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:120672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.703 [2024-12-05 14:30:21.954897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:57.703 [2024-12-05 14:30:21.954916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:120680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.703 [2024-12-05 14:30:21.954929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:57.703 [2024-12-05 14:30:21.954955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:120696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.703 [2024-12-05 14:30:21.954969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:57.703 [2024-12-05 14:30:21.954988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:120704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.704 [2024-12-05 14:30:21.955001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:57.704 [2024-12-05 14:30:21.955019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:121128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.704 [2024-12-05 14:30:21.955032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:57.704 [2024-12-05 14:30:21.955050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:121136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.704 [2024-12-05 14:30:21.955064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:57.704 [2024-12-05 14:30:21.955082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:121144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.704 [2024-12-05 14:30:21.955095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:57.704 [2024-12-05 14:30:21.955114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:121152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.704 [2024-12-05 14:30:21.955127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:57.704 [2024-12-05 14:30:21.955145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:121160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.704 [2024-12-05 14:30:21.955158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:57.704 [2024-12-05 14:30:21.955191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:121168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.704 [2024-12-05 14:30:21.955210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:57.704 [2024-12-05 14:30:21.955230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:121176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.704 [2024-12-05 14:30:21.955243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:57.704 [2024-12-05 14:30:21.955261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:121184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.704 [2024-12-05 14:30:21.955274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:57.704 [2024-12-05 14:30:21.955292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:121192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.704 [2024-12-05 14:30:21.955304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:57.704 [2024-12-05 14:30:21.955323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:121200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.704 [2024-12-05 14:30:21.955336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:57.704 [2024-12-05 14:30:21.955363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:121208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.704 [2024-12-05 14:30:21.955377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:57.704 [2024-12-05 14:30:21.955396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:121216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.704 [2024-12-05 14:30:21.955409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:57.704 [2024-12-05 14:30:21.955428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:121224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.704 [2024-12-05 14:30:21.955440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:57.704 [2024-12-05 14:30:21.955458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:121232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.704 [2024-12-05 14:30:21.955471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:57.704 [2024-12-05 14:30:21.955489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:121240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.704 [2024-12-05 14:30:21.955502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:57.704 [2024-12-05 14:30:21.955520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:121248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.704 [2024-12-05 
14:30:21.955532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:57.704 [2024-12-05 14:30:21.955550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:121256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.704 [2024-12-05 14:30:21.955563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:57.704 [2024-12-05 14:30:21.955581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:121264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.704 [2024-12-05 14:30:21.955594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:57.704 [2024-12-05 14:30:21.955612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:121272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.704 [2024-12-05 14:30:21.955625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:57.704 [2024-12-05 14:30:21.955643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:121280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.704 [2024-12-05 14:30:21.955656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:57.704 [2024-12-05 14:30:21.955673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:121288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.704 [2024-12-05 14:30:21.955686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:57.704 [2024-12-05 14:30:21.955704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:121296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.704 [2024-12-05 14:30:21.955721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:57.704 [2024-12-05 14:30:21.955747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:121304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.704 [2024-12-05 14:30:21.955761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:57.704 [2024-12-05 14:30:21.955779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:121312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.704 [2024-12-05 14:30:21.955792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.704 [2024-12-05 14:30:21.955810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:121320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.704 [2024-12-05 14:30:21.955867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:57.704 [2024-12-05 14:30:21.955888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:121328 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.704 [2024-12-05 14:30:21.955902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:57.704 [2024-12-05 14:30:21.955921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:121336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.704 [2024-12-05 14:30:21.955934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:57.704 [2024-12-05 14:30:21.955964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:121344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.704 [2024-12-05 14:30:21.955980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:57.704 [2024-12-05 14:30:21.956000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:121352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.704 [2024-12-05 14:30:21.956013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:57.704 [2024-12-05 14:30:21.956032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:121360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.704 [2024-12-05 14:30:21.956046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:57.704 [2024-12-05 14:30:21.956065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:121368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.704 [2024-12-05 14:30:21.956079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:57.704 [2024-12-05 14:30:21.956754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:120712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.704 [2024-12-05 14:30:21.956779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:57.704 [2024-12-05 14:30:21.956802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:120736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.704 [2024-12-05 14:30:21.956833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:57.704 [2024-12-05 14:30:21.956866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:120784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.704 [2024-12-05 14:30:21.956883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:57.704 [2024-12-05 14:30:21.956903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:120800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.704 [2024-12-05 14:30:21.956927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:57.704 [2024-12-05 14:30:21.956948] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:120824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.704 [2024-12-05 14:30:21.956961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:57.704 [2024-12-05 14:30:21.956980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:120840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.704 [2024-12-05 14:30:21.956994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:57.704 [2024-12-05 14:30:21.957013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:120848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.704 [2024-12-05 14:30:21.957032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:57.705 [2024-12-05 14:30:21.957051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:120856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.705 [2024-12-05 14:30:21.957065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:57.705 [2024-12-05 14:30:21.957084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:121376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.705 [2024-12-05 14:30:21.957097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:57.705 [2024-12-05 14:30:21.957116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:121384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.705 [2024-12-05 14:30:21.957129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:57.705 [2024-12-05 14:30:21.957148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:121392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.705 [2024-12-05 14:30:21.957161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:57.705 [2024-12-05 14:30:21.957194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:121400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.705 [2024-12-05 14:30:21.957207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:57.705 [2024-12-05 14:30:21.957225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:121408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.705 [2024-12-05 14:30:21.957238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:57.705 [2024-12-05 14:30:21.957256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:121416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.705 [2024-12-05 14:30:21.957269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0056 p:0 m:0 
dnr:0 00:24:57.705 [2024-12-05 14:30:21.957287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:121424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.705 [2024-12-05 14:30:21.957301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:57.705 [2024-12-05 14:30:21.957319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:121432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.705 [2024-12-05 14:30:21.957338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:57.705 [2024-12-05 14:30:21.957358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:121440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.705 [2024-12-05 14:30:21.957372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:57.705 [2024-12-05 14:30:21.965103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:121448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.705 [2024-12-05 14:30:21.965149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:57.705 [2024-12-05 14:30:21.965180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:121456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.705 [2024-12-05 14:30:21.965202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:57.705 [2024-12-05 14:30:21.965229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:121464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.705 [2024-12-05 14:30:21.965248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:57.705 [2024-12-05 14:30:21.965281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:121472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.705 [2024-12-05 14:30:21.965294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:57.705 [2024-12-05 14:30:21.965312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.705 [2024-12-05 14:30:21.965326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:57.705 [2024-12-05 14:30:21.965344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:120880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.705 [2024-12-05 14:30:21.965358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:57.705 [2024-12-05 14:30:21.965376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:120320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.705 [2024-12-05 14:30:21.965389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:57.705 [2024-12-05 14:30:21.965407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:120328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.705 [2024-12-05 14:30:21.965420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.705 [2024-12-05 14:30:21.965438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:120352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.705 [2024-12-05 14:30:21.965451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:57.705 [2024-12-05 14:30:21.965468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:120360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.705 [2024-12-05 14:30:21.965481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:57.705 [2024-12-05 14:30:21.965499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.705 [2024-12-05 14:30:21.965512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:57.705 [2024-12-05 14:30:21.965544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:120376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.705 [2024-12-05 14:30:21.965558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:57.705 [2024-12-05 14:30:21.965576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.705 [2024-12-05 14:30:21.965590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:57.705 [2024-12-05 14:30:21.965608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:120416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.705 [2024-12-05 14:30:21.965621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:57.705 [2024-12-05 14:30:21.965639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:120888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.705 [2024-12-05 14:30:21.965652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:57.705 [2024-12-05 14:30:21.965670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:120896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.705 [2024-12-05 14:30:21.965684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:57.705 [2024-12-05 14:30:21.965702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:120904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.705 [2024-12-05 14:30:21.965714] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:57.705 [2024-12-05 14:30:21.965732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:120912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.705 [2024-12-05 14:30:21.965745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:57.705 [2024-12-05 14:30:21.965763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:120920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.705 [2024-12-05 14:30:21.965776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:57.705 [2024-12-05 14:30:21.965794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:120928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.705 [2024-12-05 14:30:21.965807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:57.705 [2024-12-05 14:30:21.965840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.705 [2024-12-05 14:30:21.965856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:57.705 [2024-12-05 14:30:21.965875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:120944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.705 [2024-12-05 14:30:21.965888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:57.705 [2024-12-05 14:30:21.965906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:120952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.705 [2024-12-05 14:30:21.965919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:57.705 [2024-12-05 14:30:21.965946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:120960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.706 [2024-12-05 14:30:21.965960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:57.706 [2024-12-05 14:30:21.965978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:120968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.706 [2024-12-05 14:30:21.965991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:57.706 [2024-12-05 14:30:21.966009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:120976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.706 [2024-12-05 14:30:21.966021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:57.706 [2024-12-05 14:30:21.966039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:120984 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:57.706 [2024-12-05 14:30:21.966052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:57.706 [2024-12-05 14:30:21.966070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:120992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.706 [2024-12-05 14:30:21.966083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:57.706 [2024-12-05 14:30:21.966101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:121000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.706 [2024-12-05 14:30:21.966113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:57.706 [2024-12-05 14:30:21.966131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:121008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.706 [2024-12-05 14:30:21.966144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:57.706 [2024-12-05 14:30:21.966162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:121016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.706 [2024-12-05 14:30:21.966175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:57.706 [2024-12-05 14:30:21.966193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:120424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.706 [2024-12-05 14:30:21.966205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:57.706 [2024-12-05 14:30:21.966223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:120448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.706 [2024-12-05 14:30:21.966236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:57.706 [2024-12-05 14:30:21.966254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:120456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.706 [2024-12-05 14:30:21.966267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:57.706 [2024-12-05 14:30:21.966285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:120472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.706 [2024-12-05 14:30:21.966297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:57.706 [2024-12-05 14:30:21.966322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:120480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.706 [2024-12-05 14:30:21.966336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:57.706 [2024-12-05 14:30:21.966887] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:120520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.706 [2024-12-05 14:30:21.966915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:57.706 [2024-12-05 14:30:21.966941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:120544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.706 [2024-12-05 14:30:21.966957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:57.706 [2024-12-05 14:30:21.966976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:120584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.706 [2024-12-05 14:30:21.966990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.706 [2024-12-05 14:30:21.967008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:121024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.706 [2024-12-05 14:30:21.967020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.706 [2024-12-05 14:30:21.967039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:121480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.706 [2024-12-05 14:30:21.967052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:57.706 [2024-12-05 14:30:21.967070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:121488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.706 [2024-12-05 14:30:21.967082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:57.706 [2024-12-05 14:30:21.967100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:121496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.706 [2024-12-05 14:30:21.967113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:57.706 [2024-12-05 14:30:21.967131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:121504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.706 [2024-12-05 14:30:21.967144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:57.706 [2024-12-05 14:30:21.967162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:121512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.706 [2024-12-05 14:30:21.967175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:57.706 [2024-12-05 14:30:21.967193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:121520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.706 [2024-12-05 14:30:21.967206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0007 p:0 
m:0 dnr:0 00:24:57.706 [2024-12-05 14:30:21.967223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:121528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.706 [2024-12-05 14:30:21.967236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:57.706 [2024-12-05 14:30:21.967254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:121536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.706 [2024-12-05 14:30:21.967278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:57.706 [2024-12-05 14:30:21.967298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:121544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.706 [2024-12-05 14:30:21.967312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:57.706 [2024-12-05 14:30:21.967330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:121552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.706 [2024-12-05 14:30:21.967343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:57.706 [2024-12-05 14:30:21.967360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:121560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.706 [2024-12-05 14:30:21.967373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:57.706 [2024-12-05 14:30:21.967391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:121568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.706 [2024-12-05 14:30:21.967404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:57.706 [2024-12-05 14:30:21.967422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:121576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.706 [2024-12-05 14:30:21.967435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:57.706 [2024-12-05 14:30:21.967453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:121584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.706 [2024-12-05 14:30:21.967466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:57.706 [2024-12-05 14:30:21.967484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:121592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.706 [2024-12-05 14:30:21.967497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:57.706 [2024-12-05 14:30:21.967515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:121600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.706 [2024-12-05 14:30:21.967528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:57.706 [2024-12-05 14:30:21.967545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:121608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.706 [2024-12-05 14:30:21.967558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:57.706 [2024-12-05 14:30:21.967577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:121616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.706 [2024-12-05 14:30:21.967589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:57.706 [2024-12-05 14:30:21.967607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:121624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.706 [2024-12-05 14:30:21.967620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:57.706 [2024-12-05 14:30:21.967638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:121632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.706 [2024-12-05 14:30:21.967657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:57.706 [2024-12-05 14:30:21.967677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:121032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.706 [2024-12-05 14:30:21.967690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:57.706 [2024-12-05 14:30:21.967708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:121040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.706 [2024-12-05 14:30:21.967721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:57.706 [2024-12-05 14:30:21.967739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:121048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.707 [2024-12-05 14:30:21.967752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:57.707 [2024-12-05 14:30:21.967769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:121056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.707 [2024-12-05 14:30:21.967782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:57.707 [2024-12-05 14:30:21.967800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:121064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.707 [2024-12-05 14:30:21.967830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:57.707 [2024-12-05 14:30:21.967849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:121072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.707 [2024-12-05 
14:30:21.967863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:57.707 [2024-12-05 14:30:21.967881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:121080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.707 [2024-12-05 14:30:21.967894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:57.707 [2024-12-05 14:30:21.967912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:121088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.707 [2024-12-05 14:30:21.967926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:57.707 [2024-12-05 14:30:21.967943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:121096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.707 [2024-12-05 14:30:21.967970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:57.707 [2024-12-05 14:30:21.967991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:121104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.707 [2024-12-05 14:30:21.968004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:57.707 [2024-12-05 14:30:21.968023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:121112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.707 [2024-12-05 14:30:21.968036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:57.707 [2024-12-05 14:30:21.968054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:121120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.707 [2024-12-05 14:30:21.968067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.707 [2024-12-05 14:30:21.968093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:120608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.707 [2024-12-05 14:30:21.968107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:57.707 [2024-12-05 14:30:21.968126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:120616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.707 [2024-12-05 14:30:21.968139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:57.707 [2024-12-05 14:30:21.968156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:120648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.707 [2024-12-05 14:30:21.968169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:57.707 [2024-12-05 14:30:21.968187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:120664 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.707 [2024-12-05 14:30:21.968200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:57.707 [2024-12-05 14:30:21.968218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:120672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.707 [2024-12-05 14:30:21.968231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:57.707 [2024-12-05 14:30:21.968249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:120680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.707 [2024-12-05 14:30:21.968262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:57.707 [2024-12-05 14:30:21.968279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:120696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.707 [2024-12-05 14:30:21.968292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:57.707 [2024-12-05 14:30:21.968311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:120704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.707 [2024-12-05 14:30:21.968323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:57.707 [2024-12-05 14:30:21.968341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:121128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.707 [2024-12-05 14:30:21.968354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:57.707 [2024-12-05 14:30:21.968372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:121136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.707 [2024-12-05 14:30:21.968386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:57.707 [2024-12-05 14:30:21.968403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:121144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.707 [2024-12-05 14:30:21.968416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:57.707 [2024-12-05 14:30:21.968434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:121152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.707 [2024-12-05 14:30:21.968447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:57.707 [2024-12-05 14:30:21.968471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:121160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.707 [2024-12-05 14:30:21.968485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:57.707 [2024-12-05 14:30:21.968503] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:121168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.707 [2024-12-05 14:30:21.968516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:57.707 [2024-12-05 14:30:21.968534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:121176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.707 [2024-12-05 14:30:21.968546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:57.707 [2024-12-05 14:30:21.968564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:121184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.707 [2024-12-05 14:30:21.968577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:57.707 [2024-12-05 14:30:21.968595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:121192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.707 [2024-12-05 14:30:21.968608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:57.707 [2024-12-05 14:30:21.968626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:121200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.707 [2024-12-05 14:30:21.968638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:57.707 [2024-12-05 14:30:21.968656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:121208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.707 [2024-12-05 14:30:21.968669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:57.707 [2024-12-05 14:30:21.968687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:121216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.707 [2024-12-05 14:30:21.968699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:57.707 [2024-12-05 14:30:21.968717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:121224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.707 [2024-12-05 14:30:21.968730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:57.707 [2024-12-05 14:30:21.968749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:121232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.707 [2024-12-05 14:30:21.968761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:57.707 [2024-12-05 14:30:21.968779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:121240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.707 [2024-12-05 14:30:21.968792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0038 p:0 
m:0 dnr:0 00:24:57.707 [2024-12-05 14:30:21.968822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:121248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.707 [2024-12-05 14:30:21.968839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:57.707 [2024-12-05 14:30:21.968864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:121256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.707 [2024-12-05 14:30:21.968878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:57.707 [2024-12-05 14:30:21.968897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:121264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.707 [2024-12-05 14:30:21.968910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:57.707 [2024-12-05 14:30:21.968928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:121272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.707 [2024-12-05 14:30:21.968941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:57.707 [2024-12-05 14:30:21.968959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.707 [2024-12-05 14:30:21.968972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:57.707 [2024-12-05 14:30:21.968990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:121288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.707 [2024-12-05 14:30:21.969003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:57.708 [2024-12-05 14:30:21.969021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:121296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.708 [2024-12-05 14:30:21.969034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:57.708 [2024-12-05 14:30:21.969052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:121304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.708 [2024-12-05 14:30:21.969065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:57.708 [2024-12-05 14:30:21.969083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:121312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.708 [2024-12-05 14:30:21.969096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.708 [2024-12-05 14:30:21.969114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:121320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.708 [2024-12-05 14:30:21.969127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:57.708 [2024-12-05 14:30:21.969145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:121328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.708 [2024-12-05 14:30:21.969158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:57.708 [2024-12-05 14:30:21.969176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:121336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.708 [2024-12-05 14:30:21.969188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:57.708 [2024-12-05 14:30:21.969206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:121344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.708 [2024-12-05 14:30:21.969219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:57.708 [2024-12-05 14:30:21.969237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:121352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.708 [2024-12-05 14:30:21.969255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:57.708 [2024-12-05 14:30:21.969275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:121360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.708 [2024-12-05 14:30:21.969288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:57.708 [2024-12-05 14:30:21.969942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:121368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.708 [2024-12-05 14:30:21.969969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:57.708 [2024-12-05 14:30:21.969993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:120712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.708 [2024-12-05 14:30:21.970008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:57.708 [2024-12-05 14:30:21.970027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:120736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.708 [2024-12-05 14:30:21.970040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:57.708 [2024-12-05 14:30:21.970059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:120784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.708 [2024-12-05 14:30:21.970072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:57.708 [2024-12-05 14:30:21.970090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:120800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.708 [2024-12-05 
14:30:21.970103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:57.708 [2024-12-05 14:30:21.970120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:120824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.708 [2024-12-05 14:30:21.970133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:57.708 [2024-12-05 14:30:21.970152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:120840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.708 [2024-12-05 14:30:21.970165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:57.708 [2024-12-05 14:30:21.970183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:120848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.708 [2024-12-05 14:30:21.970196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:57.708 [2024-12-05 14:30:21.970215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:120856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.708 [2024-12-05 14:30:21.970228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:57.708 [2024-12-05 14:30:21.970246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:121376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.708 [2024-12-05 14:30:21.970258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:57.708 [2024-12-05 14:30:21.970276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:121384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.708 [2024-12-05 14:30:21.970299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:57.708 [2024-12-05 14:30:21.970319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:121392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.708 [2024-12-05 14:30:21.970333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:57.708 [2024-12-05 14:30:21.970351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:121400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.708 [2024-12-05 14:30:21.970364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:57.708 [2024-12-05 14:30:21.970382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:121408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.708 [2024-12-05 14:30:21.970395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:57.708 [2024-12-05 14:30:21.970413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:121416 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.708 [2024-12-05 14:30:21.970425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:57.708 [2024-12-05 14:30:21.970443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:121424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.708 [2024-12-05 14:30:21.970456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:57.708 [2024-12-05 14:30:21.970474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:121432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.708 [2024-12-05 14:30:21.970487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:57.708 [2024-12-05 14:30:21.970504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:121440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.708 [2024-12-05 14:30:21.970517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:57.708 [2024-12-05 14:30:21.970535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:121448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.708 [2024-12-05 14:30:21.970548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:57.708 [2024-12-05 14:30:21.970566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:121456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.708 [2024-12-05 14:30:21.970579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:57.708 [2024-12-05 14:30:21.970597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:121464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.708 [2024-12-05 14:30:21.970609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:57.708 [2024-12-05 14:30:21.970628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:121472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.708 [2024-12-05 14:30:21.970640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:57.708 [2024-12-05 14:30:21.970658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.708 [2024-12-05 14:30:21.970677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:57.708 [2024-12-05 14:30:21.970697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.708 [2024-12-05 14:30:21.970710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:57.708 [2024-12-05 14:30:21.970728] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:120320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.708 [2024-12-05 14:30:21.970741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:57.708 [2024-12-05 14:30:21.970759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.708 [2024-12-05 14:30:21.970772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.708 [2024-12-05 14:30:21.970789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:120352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.708 [2024-12-05 14:30:21.970814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:57.708 [2024-12-05 14:30:21.970837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:120360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.708 [2024-12-05 14:30:21.970850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:57.708 [2024-12-05 14:30:21.970869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:120368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.708 [2024-12-05 14:30:21.970881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:57.708 [2024-12-05 14:30:21.970900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:120376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.708 [2024-12-05 14:30:21.970913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:57.709 [2024-12-05 14:30:21.970931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:120384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.709 [2024-12-05 14:30:21.970943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:57.709 [2024-12-05 14:30:21.970961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:120416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.709 [2024-12-05 14:30:21.970974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:57.709 [2024-12-05 14:30:21.970998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:120888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.709 [2024-12-05 14:30:21.971012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:57.709 [2024-12-05 14:30:21.971031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:120896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.709 [2024-12-05 14:30:21.971044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0069 p:0 m:0 
dnr:0 00:24:57.709 [2024-12-05 14:30:21.971062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:120904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.709 [2024-12-05 14:30:21.971075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:57.709 [2024-12-05 14:30:21.971103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:120912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.709 [2024-12-05 14:30:21.971117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:57.709 [2024-12-05 14:30:21.971136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:120920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.709 [2024-12-05 14:30:21.971149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:57.709 [2024-12-05 14:30:21.971167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:120928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.709 [2024-12-05 14:30:21.971180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:57.709 [2024-12-05 14:30:21.971198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:120936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.709 [2024-12-05 14:30:21.971211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:57.709 [2024-12-05 14:30:21.971229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:120944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.709 [2024-12-05 14:30:21.971242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:57.709 [2024-12-05 14:30:21.971261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:120952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.709 [2024-12-05 14:30:21.971273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:57.709 [2024-12-05 14:30:21.971291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:120960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.709 [2024-12-05 14:30:21.971304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:57.709 [2024-12-05 14:30:21.971322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:120968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.709 [2024-12-05 14:30:21.971335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:57.709 [2024-12-05 14:30:21.971353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:120976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.709 [2024-12-05 14:30:21.971366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:57.709 [2024-12-05 14:30:21.971384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:120984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.709 [2024-12-05 14:30:21.971397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:57.709 [2024-12-05 14:30:21.971415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:120992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.709 [2024-12-05 14:30:21.971428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:57.709 [2024-12-05 14:30:21.971446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:121000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.709 [2024-12-05 14:30:21.971459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:57.709 [2024-12-05 14:30:21.971483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:121008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.709 [2024-12-05 14:30:21.971497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:57.709 [2024-12-05 14:30:21.971516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:121016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.709 [2024-12-05 14:30:21.971529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:57.709 [2024-12-05 14:30:21.971547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:120424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.709 [2024-12-05 14:30:21.971559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:57.709 [2024-12-05 14:30:21.971577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:120448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.709 [2024-12-05 14:30:21.971590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:57.709 [2024-12-05 14:30:21.971608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:120456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.709 [2024-12-05 14:30:21.971620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:57.709 [2024-12-05 14:30:21.971638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:120472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.709 [2024-12-05 14:30:21.971651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:57.709 [2024-12-05 14:30:21.972164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:120480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.709 [2024-12-05 14:30:21.972190] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:57.709 [2024-12-05 14:30:21.972214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:120520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.709 [2024-12-05 14:30:21.972230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:57.709 [2024-12-05 14:30:21.972250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:120544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.709 [2024-12-05 14:30:21.972263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:57.709 [2024-12-05 14:30:21.972281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:120584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.709 [2024-12-05 14:30:21.972295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.709 [2024-12-05 14:30:21.972313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:121024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.709 [2024-12-05 14:30:21.972326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.709 [2024-12-05 14:30:21.972345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:121480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.709 [2024-12-05 14:30:21.972358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:57.709 [2024-12-05 14:30:21.972376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:121488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.709 [2024-12-05 14:30:21.972399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:57.709 [2024-12-05 14:30:21.972419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:121496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.709 [2024-12-05 14:30:21.972433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:57.709 [2024-12-05 14:30:21.972451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:121504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.709 [2024-12-05 14:30:21.972464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:57.709 [2024-12-05 14:30:21.972482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:121512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.709 [2024-12-05 14:30:21.972495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:57.709 [2024-12-05 14:30:21.972513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121520 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:24:57.709 [2024-12-05 14:30:21.972526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:57.709 [2024-12-05 14:30:21.972544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:121528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.709 [2024-12-05 14:30:21.972557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:57.709 [2024-12-05 14:30:21.972576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:121536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.709 [2024-12-05 14:30:21.972588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:57.709 [2024-12-05 14:30:21.972606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:121544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.709 [2024-12-05 14:30:21.972619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:57.709 [2024-12-05 14:30:21.972637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:121552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.709 [2024-12-05 14:30:21.972650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:57.709 [2024-12-05 14:30:21.972668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:121560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.710 [2024-12-05 14:30:21.972681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:57.710 [2024-12-05 14:30:21.972699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:121568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.710 [2024-12-05 14:30:21.972712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:57.710 [2024-12-05 14:30:21.972730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:121576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.710 [2024-12-05 14:30:21.972744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:57.710 [2024-12-05 14:30:21.972762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:121584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.710 [2024-12-05 14:30:21.972781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:57.710 [2024-12-05 14:30:21.972800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:121592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.710 [2024-12-05 14:30:21.972829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:57.710 [2024-12-05 14:30:21.972850] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:121600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.710 [2024-12-05 14:30:21.972864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:57.710 [2024-12-05 14:30:21.972883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:121608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.710 [2024-12-05 14:30:21.972896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:57.710 [2024-12-05 14:30:21.972915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:121616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.710 [2024-12-05 14:30:21.972928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:57.710 [2024-12-05 14:30:21.972945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:121624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.710 [2024-12-05 14:30:21.972959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:57.710 [2024-12-05 14:30:21.972977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:121632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.710 [2024-12-05 14:30:21.972989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:57.710 [2024-12-05 14:30:21.973007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:121032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.710 [2024-12-05 14:30:21.973020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:57.710 [2024-12-05 14:30:21.973038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:121040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.710 [2024-12-05 14:30:21.973051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:57.710 [2024-12-05 14:30:21.973069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:121048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.710 [2024-12-05 14:30:21.973082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:57.710 [2024-12-05 14:30:21.973100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:121056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.710 [2024-12-05 14:30:21.973113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:57.710 [2024-12-05 14:30:21.973131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:121064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.710 [2024-12-05 14:30:21.973144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:57.710 
[2024-12-05 14:30:21.973162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:121072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.710 [2024-12-05 14:30:21.973181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:57.710 [2024-12-05 14:30:21.973201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:121080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.710 [2024-12-05 14:30:21.973214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:57.710 [2024-12-05 14:30:21.973232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:121088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.710 [2024-12-05 14:30:21.973244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:57.710 [2024-12-05 14:30:21.973263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:121096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.710 [2024-12-05 14:30:21.973281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:57.710 [2024-12-05 14:30:21.973300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:121104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.710 [2024-12-05 14:30:21.973313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:57.710 [2024-12-05 14:30:21.973331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:121112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.710 [2024-12-05 14:30:21.973344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:57.710 [2024-12-05 14:30:21.973362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:121120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.710 [2024-12-05 14:30:21.973375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.710 [2024-12-05 14:30:21.973393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:120608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.710 [2024-12-05 14:30:21.973406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:57.710 [2024-12-05 14:30:21.973424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:120616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.710 [2024-12-05 14:30:21.973437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:57.710 [2024-12-05 14:30:21.973455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:120648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.710 [2024-12-05 14:30:21.973468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:54 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:57.710 [2024-12-05 14:30:21.973486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:120664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.710 [2024-12-05 14:30:21.973499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:57.710 [2024-12-05 14:30:21.973517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:120672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.710 [2024-12-05 14:30:21.973530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:57.710 [2024-12-05 14:30:21.973548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:120680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.710 [2024-12-05 14:30:21.973566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:57.710 [2024-12-05 14:30:21.973591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:120696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.710 [2024-12-05 14:30:21.973606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:57.710 [2024-12-05 14:30:21.973624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:120704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.710 [2024-12-05 14:30:21.973637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:57.710 [2024-12-05 14:30:21.973655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:121128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.710 [2024-12-05 14:30:21.973668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:57.710 [2024-12-05 14:30:21.973687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:121136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.710 [2024-12-05 14:30:21.973700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:57.710 [2024-12-05 14:30:21.973718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:121144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.710 [2024-12-05 14:30:21.973731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:57.710 [2024-12-05 14:30:21.973749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:121152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.710 [2024-12-05 14:30:21.973762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:57.710 [2024-12-05 14:30:21.973781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:121160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.710 [2024-12-05 14:30:21.973799] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:57.711 [2024-12-05 14:30:21.973836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:121168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.711 [2024-12-05 14:30:21.973850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:57.711 [2024-12-05 14:30:21.973869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:121176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.711 [2024-12-05 14:30:21.973882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:57.711 [2024-12-05 14:30:21.973901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:121184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.711 [2024-12-05 14:30:21.973914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:57.711 [2024-12-05 14:30:21.973932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:121192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.711 [2024-12-05 14:30:21.973945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:57.711 [2024-12-05 14:30:21.973963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:121200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.711 [2024-12-05 14:30:21.973976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:57.711 [2024-12-05 14:30:21.974001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:121208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.711 [2024-12-05 14:30:21.974016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:57.711 [2024-12-05 14:30:21.974034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:121216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.711 [2024-12-05 14:30:21.974047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:57.711 [2024-12-05 14:30:21.974065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:121224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.711 [2024-12-05 14:30:21.974078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:57.711 [2024-12-05 14:30:21.974096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:121232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.711 [2024-12-05 14:30:21.974114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:57.711 [2024-12-05 14:30:21.974133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:121240 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:57.711 [2024-12-05 14:30:21.974146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:57.711 [2024-12-05 14:30:21.974165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:121248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.711 [2024-12-05 14:30:21.974178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:57.711 [2024-12-05 14:30:21.974196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:121256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.711 [2024-12-05 14:30:21.974210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:57.711 [2024-12-05 14:30:21.974228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:121264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.711 [2024-12-05 14:30:21.974241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:57.711 [2024-12-05 14:30:21.974259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:121272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.711 [2024-12-05 14:30:21.974272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:57.711 [2024-12-05 14:30:21.974290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:121280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.711 [2024-12-05 14:30:21.974303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:57.711 [2024-12-05 14:30:21.974322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:121288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.711 [2024-12-05 14:30:21.974339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:57.711 [2024-12-05 14:30:21.974357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:121296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.711 [2024-12-05 14:30:21.974370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:57.711 [2024-12-05 14:30:21.974389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:121304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.711 [2024-12-05 14:30:21.974408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:57.711 [2024-12-05 14:30:21.974427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:121312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.711 [2024-12-05 14:30:21.974440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.711 [2024-12-05 14:30:21.974458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:58 nsid:1 lba:121320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.711 [2024-12-05 14:30:21.974472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:57.711 [2024-12-05 14:30:21.974490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:121328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.711 [2024-12-05 14:30:21.974502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:57.711 [2024-12-05 14:30:21.974520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:121336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.711 [2024-12-05 14:30:21.974533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:57.711 [2024-12-05 14:30:21.974551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:121344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.711 [2024-12-05 14:30:21.974564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:57.711 [2024-12-05 14:30:21.974583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:121352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.711 [2024-12-05 14:30:21.974596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:57.711 [2024-12-05 14:30:21.975247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:121360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.711 [2024-12-05 14:30:21.975272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:57.711 [2024-12-05 14:30:21.975297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:121368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.711 [2024-12-05 14:30:21.975313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:57.712 [2024-12-05 14:30:21.975332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:120712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.712 [2024-12-05 14:30:21.975345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:57.712 [2024-12-05 14:30:21.975363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:120736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.712 [2024-12-05 14:30:21.975376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:57.712 [2024-12-05 14:30:21.975394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:120784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.712 [2024-12-05 14:30:21.975407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:57.712 [2024-12-05 
14:30:21.975425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:120800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.712 [2024-12-05 14:30:21.975451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:57.712 [2024-12-05 14:30:21.975472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:120824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.712 [2024-12-05 14:30:21.975486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:57.712 [2024-12-05 14:30:21.975504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:120840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.712 [2024-12-05 14:30:21.975518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:57.712 [2024-12-05 14:30:21.975536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:120848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.712 [2024-12-05 14:30:21.975549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:57.712 [2024-12-05 14:30:21.975567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:120856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.712 [2024-12-05 14:30:21.975580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:57.712 [2024-12-05 14:30:21.975598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:121376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.712 [2024-12-05 14:30:21.975610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:57.712 [2024-12-05 14:30:21.975629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:121384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.712 [2024-12-05 14:30:21.975642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:57.712 [2024-12-05 14:30:21.975660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:121392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.712 [2024-12-05 14:30:21.975673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:57.712 [2024-12-05 14:30:21.975690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:121400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.712 [2024-12-05 14:30:21.975703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:57.712 [2024-12-05 14:30:21.975721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:121408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.712 [2024-12-05 14:30:21.975734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:24 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:57.712 [2024-12-05 14:30:21.975752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:121416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.712 [2024-12-05 14:30:21.975765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:57.712 [2024-12-05 14:30:21.975783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:121424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.712 [2024-12-05 14:30:21.975796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:57.712 [2024-12-05 14:30:21.975830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:121432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.712 [2024-12-05 14:30:21.975851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:57.712 [2024-12-05 14:30:21.975871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:121440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.712 [2024-12-05 14:30:21.975884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:57.712 [2024-12-05 14:30:21.975902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:121448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.712 [2024-12-05 14:30:21.975915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:57.712 [2024-12-05 14:30:21.975933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:121456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.712 [2024-12-05 14:30:21.975945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:57.712 [2024-12-05 14:30:21.975976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:121464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.712 [2024-12-05 14:30:21.975990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:57.712 [2024-12-05 14:30:21.976008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:121472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.712 [2024-12-05 14:30:21.976021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:57.712 [2024-12-05 14:30:21.976040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.712 [2024-12-05 14:30:21.976053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:57.712 [2024-12-05 14:30:21.976070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:120880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.712 [2024-12-05 14:30:21.976083] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:57.712 [2024-12-05 14:30:21.983311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:120320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.712 [2024-12-05 14:30:21.983344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:57.712 [2024-12-05 14:30:21.983367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:120328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.712 [2024-12-05 14:30:21.983381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.712 [2024-12-05 14:30:21.983399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.712 [2024-12-05 14:30:21.983413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:57.712 [2024-12-05 14:30:21.983430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:120360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.712 [2024-12-05 14:30:21.983443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:57.712 [2024-12-05 14:30:21.983461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.712 [2024-12-05 14:30:21.983474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:57.712 [2024-12-05 14:30:21.983506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:120376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.712 [2024-12-05 14:30:21.983521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:57.712 [2024-12-05 14:30:21.983539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:120384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.712 [2024-12-05 14:30:21.983552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:57.712 [2024-12-05 14:30:21.983570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:120416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.712 [2024-12-05 14:30:21.983583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:57.712 [2024-12-05 14:30:21.983602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:120888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.712 [2024-12-05 14:30:21.983615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:57.712 [2024-12-05 14:30:21.983633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:120896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:57.712 [2024-12-05 14:30:21.983646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:57.712 [2024-12-05 14:30:21.983664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:120904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.712 [2024-12-05 14:30:21.983677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:57.712 [2024-12-05 14:30:21.983695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:120912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.712 [2024-12-05 14:30:21.983708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:57.712 [2024-12-05 14:30:21.983726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.712 [2024-12-05 14:30:21.983739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:57.712 [2024-12-05 14:30:21.983757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:120928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.712 [2024-12-05 14:30:21.983770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:57.713 [2024-12-05 14:30:21.983788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:120936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.713 [2024-12-05 14:30:21.983814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:57.713 [2024-12-05 14:30:21.983837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:120944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.713 [2024-12-05 14:30:21.983851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:57.713 [2024-12-05 14:30:21.983869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:120952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.713 [2024-12-05 14:30:21.983882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:57.713 [2024-12-05 14:30:21.983908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:120960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.713 [2024-12-05 14:30:21.983922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:57.713 [2024-12-05 14:30:21.983940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:120968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.713 [2024-12-05 14:30:21.983966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:57.713 [2024-12-05 14:30:21.983986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:107 nsid:1 lba:120976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.713 [2024-12-05 14:30:21.984000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:57.713 [2024-12-05 14:30:21.984018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:120984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.713 [2024-12-05 14:30:21.984031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:57.713 [2024-12-05 14:30:21.984049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:120992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.713 [2024-12-05 14:30:21.984062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:57.713 [2024-12-05 14:30:21.984080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:121000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.713 [2024-12-05 14:30:21.984093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:57.713 [2024-12-05 14:30:21.984111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:121008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.713 [2024-12-05 14:30:21.984124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:57.713 [2024-12-05 14:30:21.984142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:121016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.713 [2024-12-05 14:30:21.984155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:57.713 [2024-12-05 14:30:21.984173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:120424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.713 [2024-12-05 14:30:21.984186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:57.713 [2024-12-05 14:30:21.984204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:120448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.713 [2024-12-05 14:30:21.984217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:57.713 [2024-12-05 14:30:21.984235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:120456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.713 [2024-12-05 14:30:21.984248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:57.713 [2024-12-05 14:30:21.984492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:120472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.713 [2024-12-05 14:30:21.984518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:57.713 [2024-12-05 
14:30:21.984559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:120480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.713 [2024-12-05 14:30:21.984589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:57.713 [2024-12-05 14:30:21.984613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:120520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.713 [2024-12-05 14:30:21.984627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:57.713 [2024-12-05 14:30:21.984649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:120544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.713 [2024-12-05 14:30:21.984662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:57.713 [2024-12-05 14:30:21.984684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:120584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.713 [2024-12-05 14:30:21.984696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.713 [2024-12-05 14:30:21.984718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:121024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.713 [2024-12-05 14:30:21.984731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.713 [2024-12-05 14:30:21.984753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:121480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.713 [2024-12-05 14:30:21.984766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:57.713 [2024-12-05 14:30:21.984787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:121488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.713 [2024-12-05 14:30:21.984800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:57.713 [2024-12-05 14:30:21.984841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:121496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.713 [2024-12-05 14:30:21.984855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:57.713 [2024-12-05 14:30:21.984877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:121504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.713 [2024-12-05 14:30:21.984890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:57.713 [2024-12-05 14:30:21.984911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:121512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.713 [2024-12-05 14:30:21.984924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:31 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:57.713 [2024-12-05 14:30:21.984945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:121520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.713 [2024-12-05 14:30:21.984958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:57.713 [2024-12-05 14:30:21.984979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:121528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.713 [2024-12-05 14:30:21.984992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:57.713 [2024-12-05 14:30:21.985014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:121536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.713 [2024-12-05 14:30:21.985034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:57.713 [2024-12-05 14:30:21.985057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:121544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.713 [2024-12-05 14:30:21.985071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:57.713 [2024-12-05 14:30:21.985092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:121552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.713 [2024-12-05 14:30:21.985105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:57.713 [2024-12-05 14:30:21.985126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:121560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.713 [2024-12-05 14:30:21.985139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:57.713 [2024-12-05 14:30:21.985161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:121568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.713 [2024-12-05 14:30:21.985174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:57.713 [2024-12-05 14:30:21.985195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:121576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.713 [2024-12-05 14:30:21.985208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:57.713 [2024-12-05 14:30:21.985230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:121584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.713 [2024-12-05 14:30:21.985243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:57.713 [2024-12-05 14:30:21.985264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:121592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.713 [2024-12-05 14:30:21.985277] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:57.713 [2024-12-05 14:30:21.985298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:121600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.713 [2024-12-05 14:30:21.985311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:57.713 [2024-12-05 14:30:21.985333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:121608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.713 [2024-12-05 14:30:21.985346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:57.713 [2024-12-05 14:30:21.985367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:121616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.713 [2024-12-05 14:30:21.985380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:57.713 [2024-12-05 14:30:21.985402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:121624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.714 [2024-12-05 14:30:21.985415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:57.714 [2024-12-05 14:30:21.985436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:121632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.714 [2024-12-05 14:30:21.985455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:57.714 [2024-12-05 14:30:21.985478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:121032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.714 [2024-12-05 14:30:21.985491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:57.714 [2024-12-05 14:30:21.985513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:121040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.714 [2024-12-05 14:30:21.985526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:57.714 [2024-12-05 14:30:21.985547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:121048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.714 [2024-12-05 14:30:21.985560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:57.714 [2024-12-05 14:30:21.985582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:121056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.714 [2024-12-05 14:30:21.985594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:57.714 [2024-12-05 14:30:21.985616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:121064 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:57.714 [2024-12-05 14:30:21.985629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:57.714 [2024-12-05 14:30:21.985650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:121072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.714 [2024-12-05 14:30:21.985663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:57.714 [2024-12-05 14:30:21.985684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:121080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.714 [2024-12-05 14:30:21.985697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:57.714 [2024-12-05 14:30:21.985719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:121088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.714 [2024-12-05 14:30:21.985732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:57.714 [2024-12-05 14:30:21.985753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:121096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.714 [2024-12-05 14:30:21.985766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:57.714 [2024-12-05 14:30:21.985787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:121104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.714 [2024-12-05 14:30:21.985800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:57.714 [2024-12-05 14:30:21.985839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:121112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.714 [2024-12-05 14:30:21.985853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:57.714 [2024-12-05 14:30:21.985875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:121120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.714 [2024-12-05 14:30:21.985888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.714 [2024-12-05 14:30:21.985920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:120608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.714 [2024-12-05 14:30:21.985934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:57.714 [2024-12-05 14:30:21.985955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:120616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.714 [2024-12-05 14:30:21.985968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:57.714 [2024-12-05 14:30:21.985990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:73 nsid:1 lba:120648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.714 [2024-12-05 14:30:21.986003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:57.714 [2024-12-05 14:30:21.986024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:120664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.714 [2024-12-05 14:30:21.986037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:57.714 [2024-12-05 14:30:21.986058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:120672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.714 [2024-12-05 14:30:21.986071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:57.714 [2024-12-05 14:30:21.986092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:120680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.714 [2024-12-05 14:30:21.986105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:57.714 [2024-12-05 14:30:21.986126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:120696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.714 [2024-12-05 14:30:21.986139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:57.714 [2024-12-05 14:30:21.986160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:120704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.714 [2024-12-05 14:30:21.986173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:57.714 [2024-12-05 14:30:21.986194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:121128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.714 [2024-12-05 14:30:21.986207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:57.714 [2024-12-05 14:30:21.986229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:121136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.714 [2024-12-05 14:30:21.986242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:57.714 [2024-12-05 14:30:21.986263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:121144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.714 [2024-12-05 14:30:21.986276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:57.714 [2024-12-05 14:30:21.986297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:121152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.714 [2024-12-05 14:30:21.986310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:57.714 [2024-12-05 
14:30:21.986339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:121160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.714 [2024-12-05 14:30:21.986353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:57.714 [2024-12-05 14:30:21.986374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:121168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.714 [2024-12-05 14:30:21.986387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:57.714 [2024-12-05 14:30:21.986408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:121176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.714 [2024-12-05 14:30:21.986421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:57.714 [2024-12-05 14:30:21.986443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:121184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.714 [2024-12-05 14:30:21.986457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:57.714 [2024-12-05 14:30:21.986478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:121192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.714 [2024-12-05 14:30:21.986490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:57.714 [2024-12-05 14:30:21.986511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:121200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.714 [2024-12-05 14:30:21.986524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:57.714 [2024-12-05 14:30:21.986546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:121208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.714 [2024-12-05 14:30:21.986559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:57.714 [2024-12-05 14:30:21.986580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:121216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.714 [2024-12-05 14:30:21.986593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:57.714 [2024-12-05 14:30:21.986614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:121224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.714 [2024-12-05 14:30:21.986627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:57.714 [2024-12-05 14:30:21.986648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:121232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.714 [2024-12-05 14:30:21.986661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:110 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:57.714 [2024-12-05 14:30:21.986683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:121240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.714 [2024-12-05 14:30:21.986696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:57.714 [2024-12-05 14:30:21.986717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:121248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.714 [2024-12-05 14:30:21.986731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:57.714 [2024-12-05 14:30:21.986758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:121256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.714 [2024-12-05 14:30:21.986772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:57.715 [2024-12-05 14:30:21.986793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.715 [2024-12-05 14:30:21.986819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:57.715 [2024-12-05 14:30:21.986843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:121272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.715 [2024-12-05 14:30:21.986856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:57.715 [2024-12-05 14:30:21.986878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:121280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.715 [2024-12-05 14:30:21.986892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:57.715 [2024-12-05 14:30:21.986913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:121288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-12-05 14:30:21.986926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:57.715 [2024-12-05 14:30:21.986948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:121296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.715 [2024-12-05 14:30:21.986961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:57.715 [2024-12-05 14:30:21.986982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:121304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-12-05 14:30:21.986995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:57.715 [2024-12-05 14:30:21.987017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:121312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.715 [2024-12-05 14:30:21.987029] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.715 [2024-12-05 14:30:21.987051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:121320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-12-05 14:30:21.987063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:57.715 [2024-12-05 14:30:21.987085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:121328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-12-05 14:30:21.987098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:57.715 [2024-12-05 14:30:21.987119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:121336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-12-05 14:30:21.987131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:57.715 [2024-12-05 14:30:21.987153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:121344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-12-05 14:30:21.987166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:57.715 [2024-12-05 14:30:21.987325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:121352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.715 [2024-12-05 14:30:21.987353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:57.715 [2024-12-05 14:30:28.968370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.715 [2024-12-05 14:30:28.968430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:57.715 [2024-12-05 14:30:28.968478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-12-05 14:30:28.968497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:57.715 [2024-12-05 14:30:28.968516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.715 [2024-12-05 14:30:28.968531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:57.715 [2024-12-05 14:30:28.968548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-12-05 14:30:28.968561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:57.715 [2024-12-05 14:30:28.968578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5392 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:57.715 [2024-12-05 14:30:28.968591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:57.715 [2024-12-05 14:30:28.968608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.715 [2024-12-05 14:30:28.968621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:57.715 [2024-12-05 14:30:28.968638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:5408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.715 [2024-12-05 14:30:28.968650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:57.715 [2024-12-05 14:30:28.968667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.715 [2024-12-05 14:30:28.968680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:57.715 [2024-12-05 14:30:28.968697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-12-05 14:30:28.968710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:57.715 [2024-12-05 14:30:28.968726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-12-05 14:30:28.968738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:57.715 [2024-12-05 14:30:28.968755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-12-05 14:30:28.968768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:57.715 [2024-12-05 14:30:28.968784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-12-05 14:30:28.968856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:57.715 [2024-12-05 14:30:28.968882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-12-05 14:30:28.968897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:57.715 [2024-12-05 14:30:28.968917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-12-05 14:30:28.968931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:57.715 [2024-12-05 14:30:28.968951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:4776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-12-05 14:30:28.968964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:57.715 [2024-12-05 14:30:28.968984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-12-05 14:30:28.968999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:57.715 [2024-12-05 14:30:28.969019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-12-05 14:30:28.969033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:57.715 [2024-12-05 14:30:28.969054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-12-05 14:30:28.969068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:57.715 [2024-12-05 14:30:28.969088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-12-05 14:30:28.969103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:57.715 [2024-12-05 14:30:28.969123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-12-05 14:30:28.969137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:57.715 [2024-12-05 14:30:28.969172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.716 [2024-12-05 14:30:28.969216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:57.716 [2024-12-05 14:30:28.969234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.716 [2024-12-05 14:30:28.969247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.716 [2024-12-05 14:30:28.969264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.716 [2024-12-05 14:30:28.969277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:57.716 [2024-12-05 14:30:28.969294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.716 [2024-12-05 14:30:28.969307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:57.716 [2024-12-05 14:30:28.969334] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.716 [2024-12-05 14:30:28.969349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:57.716 [2024-12-05 14:30:28.969366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.716 [2024-12-05 14:30:28.969379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:57.716 [2024-12-05 14:30:28.969397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.716 [2024-12-05 14:30:28.969410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:57.716 [2024-12-05 14:30:28.969427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.716 [2024-12-05 14:30:28.969440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:57.716 [2024-12-05 14:30:28.969458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-12-05 14:30:28.969470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:57.716 [2024-12-05 14:30:28.969488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-12-05 14:30:28.969501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:57.716 [2024-12-05 14:30:28.970022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-12-05 14:30:28.970053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:57.716 [2024-12-05 14:30:28.970083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.716 [2024-12-05 14:30:28.970101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:57.716 [2024-12-05 14:30:28.970127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-12-05 14:30:28.970143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:57.716 [2024-12-05 14:30:28.970169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.716 [2024-12-05 14:30:28.970184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006d p:0 m:0 dnr:0 
00:24:57.716 [2024-12-05 14:30:28.970209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-12-05 14:30:28.970239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:57.716 [2024-12-05 14:30:28.970263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.716 [2024-12-05 14:30:28.970292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:57.716 [2024-12-05 14:30:28.970327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.716 [2024-12-05 14:30:28.970343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:57.716 [2024-12-05 14:30:28.970366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.716 [2024-12-05 14:30:28.970380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:57.716 [2024-12-05 14:30:28.970404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-12-05 14:30:28.970418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:57.716 [2024-12-05 14:30:28.970442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-12-05 14:30:28.970456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:57.716 [2024-12-05 14:30:28.970479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.716 [2024-12-05 14:30:28.970493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:57.716 [2024-12-05 14:30:28.970515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.716 [2024-12-05 14:30:28.970545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:57.716 [2024-12-05 14:30:28.970567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.716 [2024-12-05 14:30:28.970583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:57.716 [2024-12-05 14:30:28.970616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-12-05 14:30:28.970631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:6 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:57.716 [2024-12-05 14:30:28.970654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.716 [2024-12-05 14:30:28.970668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:57.716 [2024-12-05 14:30:28.970705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.716 [2024-12-05 14:30:28.970719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:57.716 [2024-12-05 14:30:28.970741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.716 [2024-12-05 14:30:28.970754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:57.716 [2024-12-05 14:30:28.970776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.716 [2024-12-05 14:30:28.970790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:57.716 [2024-12-05 14:30:28.970828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.716 [2024-12-05 14:30:28.970866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:57.716 [2024-12-05 14:30:28.970903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.716 [2024-12-05 14:30:28.970924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:57.716 [2024-12-05 14:30:28.970948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.716 [2024-12-05 14:30:28.970964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:57.716 [2024-12-05 14:30:28.970987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.716 [2024-12-05 14:30:28.971001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:57.716 [2024-12-05 14:30:28.971024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.716 [2024-12-05 14:30:28.971038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.716 [2024-12-05 14:30:28.971062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-12-05 14:30:28.971076] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.716 [2024-12-05 14:30:28.971099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.716 [2024-12-05 14:30:28.971114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:57.716 [2024-12-05 14:30:28.971137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-12-05 14:30:28.971152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:57.716 [2024-12-05 14:30:28.971190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.716 [2024-12-05 14:30:28.971219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:57.716 [2024-12-05 14:30:28.971240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-12-05 14:30:28.971254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:57.716 [2024-12-05 14:30:28.971276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.716 [2024-12-05 14:30:28.971289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:57.717 [2024-12-05 14:30:28.971310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.717 [2024-12-05 14:30:28.971324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:57.717 [2024-12-05 14:30:28.971346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.717 [2024-12-05 14:30:28.971367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:57.717 [2024-12-05 14:30:28.971390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.717 [2024-12-05 14:30:28.971403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:57.717 [2024-12-05 14:30:28.971426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.717 [2024-12-05 14:30:28.971440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:57.717 [2024-12-05 14:30:28.971462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.717 [2024-12-05 14:30:28.971475] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:57.717 [2024-12-05 14:30:28.971499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.717 [2024-12-05 14:30:28.971513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:57.717 [2024-12-05 14:30:28.971534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.717 [2024-12-05 14:30:28.971548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:57.717 [2024-12-05 14:30:28.971569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.717 [2024-12-05 14:30:28.971584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:57.717 [2024-12-05 14:30:28.971605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:5704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.717 [2024-12-05 14:30:28.971619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:57.717 [2024-12-05 14:30:28.971640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.717 [2024-12-05 14:30:28.971653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:57.717 [2024-12-05 14:30:28.971675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.717 [2024-12-05 14:30:28.971689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:57.717 [2024-12-05 14:30:28.971710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.717 [2024-12-05 14:30:28.971724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:57.717 [2024-12-05 14:30:28.971745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.717 [2024-12-05 14:30:28.971759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:57.717 [2024-12-05 14:30:28.971780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.717 [2024-12-05 14:30:28.971794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:57.717 [2024-12-05 14:30:28.971839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5176 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:57.717 [2024-12-05 14:30:28.971866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:57.717 [2024-12-05 14:30:28.971890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.717 [2024-12-05 14:30:28.971904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:57.717 [2024-12-05 14:30:28.971927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.717 [2024-12-05 14:30:28.971941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:57.717 [2024-12-05 14:30:28.971975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.717 [2024-12-05 14:30:28.971991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:57.717 [2024-12-05 14:30:28.972013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.717 [2024-12-05 14:30:28.972027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:57.717 [2024-12-05 14:30:28.972049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.717 [2024-12-05 14:30:28.972063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:57.717 [2024-12-05 14:30:28.972226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.717 [2024-12-05 14:30:28.972265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:57.717 [2024-12-05 14:30:28.972294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.717 [2024-12-05 14:30:28.972310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:57.717 [2024-12-05 14:30:28.972336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.717 [2024-12-05 14:30:28.972356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:57.717 [2024-12-05 14:30:28.972381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.717 [2024-12-05 14:30:28.972399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:57.717 [2024-12-05 14:30:28.972424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:5768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.717 [2024-12-05 14:30:28.972438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:57.717 [2024-12-05 14:30:28.972463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.717 [2024-12-05 14:30:28.972477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:57.717 [2024-12-05 14:30:28.972502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.717 [2024-12-05 14:30:28.972526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.717 [2024-12-05 14:30:28.972553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.717 [2024-12-05 14:30:28.972568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:57.717 [2024-12-05 14:30:28.972593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.717 [2024-12-05 14:30:28.972607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:57.717 [2024-12-05 14:30:28.972631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.717 [2024-12-05 14:30:28.972645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:57.717 [2024-12-05 14:30:28.972670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.717 [2024-12-05 14:30:28.972684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:57.717 [2024-12-05 14:30:28.972708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.717 [2024-12-05 14:30:28.972722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:57.717 [2024-12-05 14:30:28.972747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.717 [2024-12-05 14:30:28.972760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:57.717 [2024-12-05 14:30:28.972785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.717 [2024-12-05 14:30:28.972799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:57.717 [2024-12-05 14:30:28.972853] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.717 [2024-12-05 14:30:28.972888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:57.717 [2024-12-05 14:30:28.972915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.717 [2024-12-05 14:30:28.972933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:57.717 [2024-12-05 14:30:28.972959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.717 [2024-12-05 14:30:28.972974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:57.717 [2024-12-05 14:30:28.973001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.717 [2024-12-05 14:30:28.973015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:57.717 [2024-12-05 14:30:28.973042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.717 [2024-12-05 14:30:28.973064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:57.718 [2024-12-05 14:30:28.973092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.718 [2024-12-05 14:30:28.973107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:57.718 [2024-12-05 14:30:28.973133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.718 [2024-12-05 14:30:28.973148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:57.718 [2024-12-05 14:30:28.973174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.718 [2024-12-05 14:30:28.973204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:57.718 [2024-12-05 14:30:28.973244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.718 [2024-12-05 14:30:28.973258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:57.718 [2024-12-05 14:30:28.973283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.718 [2024-12-05 14:30:28.973296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 
00:24:57.718 [2024-12-05 14:30:28.973321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.718 [2024-12-05 14:30:28.973335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:57.718 [2024-12-05 14:30:28.973360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.718 [2024-12-05 14:30:28.973374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:57.718 [2024-12-05 14:30:28.973398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.718 [2024-12-05 14:30:28.973412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:57.718 [2024-12-05 14:30:28.973437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.718 [2024-12-05 14:30:28.973450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:57.718 [2024-12-05 14:30:28.973475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.718 [2024-12-05 14:30:28.973489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:57.718 [2024-12-05 14:30:28.973513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.718 [2024-12-05 14:30:28.973528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:57.718 [2024-12-05 14:30:28.973552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.718 [2024-12-05 14:30:28.973566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:57.718 [2024-12-05 14:30:28.973597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.718 [2024-12-05 14:30:28.973612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:57.718 [2024-12-05 14:30:28.973636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.718 [2024-12-05 14:30:28.973650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:57.718 [2024-12-05 14:30:28.973676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.718 [2024-12-05 14:30:28.973690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:51 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:57.718 [2024-12-05 14:30:28.973715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.718 [2024-12-05 14:30:28.973730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:57.718 [2024-12-05 14:30:28.973755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.718 [2024-12-05 14:30:28.973769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:57.718 [2024-12-05 14:30:28.973794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.718 [2024-12-05 14:30:28.973808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:57.718 [2024-12-05 14:30:28.973865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.718 [2024-12-05 14:30:28.973890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:57.718 [2024-12-05 14:30:28.973921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.718 [2024-12-05 14:30:28.973937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.718 [2024-12-05 14:30:42.383924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:59624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.718 [2024-12-05 14:30:42.383991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.718 [2024-12-05 14:30:42.384022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:59632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.718 [2024-12-05 14:30:42.384037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.718 [2024-12-05 14:30:42.384053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:59640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.718 [2024-12-05 14:30:42.384067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.718 [2024-12-05 14:30:42.384081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:59648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.718 [2024-12-05 14:30:42.384094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.718 [2024-12-05 14:30:42.384125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:59672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.718 [2024-12-05 14:30:42.384140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.718 [2024-12-05 14:30:42.384155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:59680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.718 [2024-12-05 14:30:42.384168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.718 [2024-12-05 14:30:42.384181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:59696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.718 [2024-12-05 14:30:42.384194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.718 [2024-12-05 14:30:42.384209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:59712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.718 [2024-12-05 14:30:42.384232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.718 [2024-12-05 14:30:42.384283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:59752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.718 [2024-12-05 14:30:42.384319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.718 [2024-12-05 14:30:42.384331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:59760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.718 [2024-12-05 14:30:42.384343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.718 [2024-12-05 14:30:42.384355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:59768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.718 [2024-12-05 14:30:42.384367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.718 [2024-12-05 14:30:42.384380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:59784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.718 [2024-12-05 14:30:42.384391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.718 [2024-12-05 14:30:42.384403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:59024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.718 [2024-12-05 14:30:42.384415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.718 [2024-12-05 14:30:42.384427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:59032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.718 [2024-12-05 14:30:42.384438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.718 [2024-12-05 14:30:42.384451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:59040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.718 [2024-12-05 14:30:42.384463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:57.718 [2024-12-05 14:30:42.384477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:59056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.718 [2024-12-05 14:30:42.384488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.718 [2024-12-05 14:30:42.384501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:59064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.718 [2024-12-05 14:30:42.384514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.718 [2024-12-05 14:30:42.384534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:59104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.718 [2024-12-05 14:30:42.384547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.718 [2024-12-05 14:30:42.384561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:59112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.718 [2024-12-05 14:30:42.384572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.718 [2024-12-05 14:30:42.384585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:59128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.719 [2024-12-05 14:30:42.384596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.719 [2024-12-05 14:30:42.384609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:59144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.719 [2024-12-05 14:30:42.384621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.719 [2024-12-05 14:30:42.384634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:59152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.719 [2024-12-05 14:30:42.384645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.719 [2024-12-05 14:30:42.384658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:59160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.719 [2024-12-05 14:30:42.384670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.719 [2024-12-05 14:30:42.384682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:59168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.719 [2024-12-05 14:30:42.384695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.719 [2024-12-05 14:30:42.384708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:59192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.719 [2024-12-05 14:30:42.384720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.719 [2024-12-05 
14:30:42.384732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:59200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.719 [2024-12-05 14:30:42.384744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.719 [2024-12-05 14:30:42.384757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:59216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.719 [2024-12-05 14:30:42.384768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.719 [2024-12-05 14:30:42.384782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:59232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.719 [2024-12-05 14:30:42.384793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.719 [2024-12-05 14:30:42.384806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:59816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.719 [2024-12-05 14:30:42.384880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.719 [2024-12-05 14:30:42.384909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:59824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.719 [2024-12-05 14:30:42.384929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.719 [2024-12-05 14:30:42.384944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:59832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.719 [2024-12-05 14:30:42.384957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.719 [2024-12-05 14:30:42.384971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:59856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.719 [2024-12-05 14:30:42.384983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.719 [2024-12-05 14:30:42.384998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:59864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.719 [2024-12-05 14:30:42.385011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.719 [2024-12-05 14:30:42.385025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:59872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.719 [2024-12-05 14:30:42.385038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.719 [2024-12-05 14:30:42.385052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:59880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.719 [2024-12-05 14:30:42.385065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.719 [2024-12-05 14:30:42.385079] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:59888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.719 [2024-12-05 14:30:42.385091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.719 [2024-12-05 14:30:42.385105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:59896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.719 [2024-12-05 14:30:42.385118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.719 [2024-12-05 14:30:42.385132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:59904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.719 [2024-12-05 14:30:42.385145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.719 [2024-12-05 14:30:42.385159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:59912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.719 [2024-12-05 14:30:42.385172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.719 [2024-12-05 14:30:42.385200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:59920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.719 [2024-12-05 14:30:42.385239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.719 [2024-12-05 14:30:42.385252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:59928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.719 [2024-12-05 14:30:42.385264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.719 [2024-12-05 14:30:42.385277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.719 [2024-12-05 14:30:42.385289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.719 [2024-12-05 14:30:42.385308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:59944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.719 [2024-12-05 14:30:42.385320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.719 [2024-12-05 14:30:42.385334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:59952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.719 [2024-12-05 14:30:42.385345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.719 [2024-12-05 14:30:42.385359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:59960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.719 [2024-12-05 14:30:42.385370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.719 [2024-12-05 14:30:42.385383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:36 nsid:1 lba:59968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.719 [2024-12-05 14:30:42.385396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.719 [2024-12-05 14:30:42.385408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:59976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.719 [2024-12-05 14:30:42.385420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.719 [2024-12-05 14:30:42.385433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:59984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.719 [2024-12-05 14:30:42.385445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.719 [2024-12-05 14:30:42.385459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:59992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.719 [2024-12-05 14:30:42.385470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.719 [2024-12-05 14:30:42.385484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:60000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.719 [2024-12-05 14:30:42.385495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.719 [2024-12-05 14:30:42.385508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:60008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.719 [2024-12-05 14:30:42.385521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.719 [2024-12-05 14:30:42.385534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:59248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.719 [2024-12-05 14:30:42.385546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.719 [2024-12-05 14:30:42.385559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:59256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.719 [2024-12-05 14:30:42.385571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.719 [2024-12-05 14:30:42.385584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:59264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.720 [2024-12-05 14:30:42.385596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.720 [2024-12-05 14:30:42.385610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:59272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.720 [2024-12-05 14:30:42.385627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.720 [2024-12-05 14:30:42.385641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:59280 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.720 [2024-12-05 14:30:42.385653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.720 [2024-12-05 14:30:42.385666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:59320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.720 [2024-12-05 14:30:42.385678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.720 [2024-12-05 14:30:42.385692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:59328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.720 [2024-12-05 14:30:42.385705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.720 [2024-12-05 14:30:42.385718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:59344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.720 [2024-12-05 14:30:42.385730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.720 [2024-12-05 14:30:42.385743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:60016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.720 [2024-12-05 14:30:42.385755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.720 [2024-12-05 14:30:42.385768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:60024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.720 [2024-12-05 14:30:42.385780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.720 [2024-12-05 14:30:42.385793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:60032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.720 [2024-12-05 14:30:42.385805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.720 [2024-12-05 14:30:42.385877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:60040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.720 [2024-12-05 14:30:42.385893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.720 [2024-12-05 14:30:42.385908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:60048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.720 [2024-12-05 14:30:42.385921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.720 [2024-12-05 14:30:42.385935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:59352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.720 [2024-12-05 14:30:42.385948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.720 [2024-12-05 14:30:42.385962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:57.720 [2024-12-05 14:30:42.385975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.720 [2024-12-05 14:30:42.385989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:59368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.720 [2024-12-05 14:30:42.386001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.720 [2024-12-05 14:30:42.386025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:59376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.720 [2024-12-05 14:30:42.386040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.720 [2024-12-05 14:30:42.386060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:59392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.720 [2024-12-05 14:30:42.386074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.720 [2024-12-05 14:30:42.386087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:59400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.720 [2024-12-05 14:30:42.386100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.720 [2024-12-05 14:30:42.386114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:59408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.720 [2024-12-05 14:30:42.386126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.720 [2024-12-05 14:30:42.386140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:59432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.720 [2024-12-05 14:30:42.386153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.720 [2024-12-05 14:30:42.386166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:59448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.720 [2024-12-05 14:30:42.386179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.720 [2024-12-05 14:30:42.386207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:59464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.720 [2024-12-05 14:30:42.386219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.720 [2024-12-05 14:30:42.386245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:59504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.720 [2024-12-05 14:30:42.386257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.720 [2024-12-05 14:30:42.386270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:59528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.720 [2024-12-05 14:30:42.386281] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.720 [2024-12-05 14:30:42.386294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:59544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.720 [2024-12-05 14:30:42.386306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.720 [2024-12-05 14:30:42.386319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:59568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.720 [2024-12-05 14:30:42.386330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.720 [2024-12-05 14:30:42.386343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:59592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.720 [2024-12-05 14:30:42.386355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.720 [2024-12-05 14:30:42.386369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:59600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.720 [2024-12-05 14:30:42.386381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.720 [2024-12-05 14:30:42.386400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:60056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.720 [2024-12-05 14:30:42.386413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.720 [2024-12-05 14:30:42.386426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:60064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.720 [2024-12-05 14:30:42.386438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.720 [2024-12-05 14:30:42.386451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:60072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.720 [2024-12-05 14:30:42.386463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.720 [2024-12-05 14:30:42.386476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:60080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.720 [2024-12-05 14:30:42.386488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.720 [2024-12-05 14:30:42.386506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:60088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.720 [2024-12-05 14:30:42.386519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.720 [2024-12-05 14:30:42.386532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.720 [2024-12-05 14:30:42.386544] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.720 [2024-12-05 14:30:42.386557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:60104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.720 [2024-12-05 14:30:42.386569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.720 [2024-12-05 14:30:42.386582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.720 [2024-12-05 14:30:42.386593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.720 [2024-12-05 14:30:42.386607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:60120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.720 [2024-12-05 14:30:42.386618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.720 [2024-12-05 14:30:42.386631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:60128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.720 [2024-12-05 14:30:42.386643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.720 [2024-12-05 14:30:42.386656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:60136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.720 [2024-12-05 14:30:42.386668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.720 [2024-12-05 14:30:42.386681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:60144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.720 [2024-12-05 14:30:42.386693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.720 [2024-12-05 14:30:42.386706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:59608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.720 [2024-12-05 14:30:42.386724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.720 [2024-12-05 14:30:42.386738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:59616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.721 [2024-12-05 14:30:42.386750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.721 [2024-12-05 14:30:42.386763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:59656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.721 [2024-12-05 14:30:42.386775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.721 [2024-12-05 14:30:42.386788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:59664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.721 [2024-12-05 14:30:42.386800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.721 [2024-12-05 14:30:42.386828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:59688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.721 [2024-12-05 14:30:42.386868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.721 [2024-12-05 14:30:42.386883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:59704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.721 [2024-12-05 14:30:42.386896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.721 [2024-12-05 14:30:42.386909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:59720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.721 [2024-12-05 14:30:42.386922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.721 [2024-12-05 14:30:42.386935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:59728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.721 [2024-12-05 14:30:42.386948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.721 [2024-12-05 14:30:42.386967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:60152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.721 [2024-12-05 14:30:42.386980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.721 [2024-12-05 14:30:42.386994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:60160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.721 [2024-12-05 14:30:42.387007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.721 [2024-12-05 14:30:42.387020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:60168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.721 [2024-12-05 14:30:42.387032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.721 [2024-12-05 14:30:42.387046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:60176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.721 [2024-12-05 14:30:42.387059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.721 [2024-12-05 14:30:42.387073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:60184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.721 [2024-12-05 14:30:42.387085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.721 [2024-12-05 14:30:42.387105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:60192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.721 [2024-12-05 14:30:42.387119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:57.721 [2024-12-05 14:30:42.387133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:60200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.721 [2024-12-05 14:30:42.387145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.721 [2024-12-05 14:30:42.387159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:60208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.721 [2024-12-05 14:30:42.387171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.721 [2024-12-05 14:30:42.387217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:60216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.721 [2024-12-05 14:30:42.387232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.721 [2024-12-05 14:30:42.387245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:60224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.721 [2024-12-05 14:30:42.387257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.721 [2024-12-05 14:30:42.387270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:60232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.721 [2024-12-05 14:30:42.387281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.721 [2024-12-05 14:30:42.387295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:60240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.721 [2024-12-05 14:30:42.387307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.721 [2024-12-05 14:30:42.387320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:60248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.721 [2024-12-05 14:30:42.387332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.721 [2024-12-05 14:30:42.387345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:60256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.721 [2024-12-05 14:30:42.387357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.721 [2024-12-05 14:30:42.387370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.721 [2024-12-05 14:30:42.387381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.721 [2024-12-05 14:30:42.387394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:60272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.721 [2024-12-05 14:30:42.387406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.721 
[2024-12-05 14:30:42.387420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:60280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.721 [2024-12-05 14:30:42.387431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.721 [2024-12-05 14:30:42.387444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:60288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.721 [2024-12-05 14:30:42.387462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.721 [2024-12-05 14:30:42.387475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:60296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.721 [2024-12-05 14:30:42.387487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.721 [2024-12-05 14:30:42.387501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:60304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.721 [2024-12-05 14:30:42.387513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.721 [2024-12-05 14:30:42.387526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:59736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.721 [2024-12-05 14:30:42.387538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.721 [2024-12-05 14:30:42.387551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:59744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.721 [2024-12-05 14:30:42.387562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.721 [2024-12-05 14:30:42.387575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:59776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.721 [2024-12-05 14:30:42.387587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.721 [2024-12-05 14:30:42.387600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:59792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.721 [2024-12-05 14:30:42.387612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.721 [2024-12-05 14:30:42.387625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:59800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.721 [2024-12-05 14:30:42.387637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.721 [2024-12-05 14:30:42.387650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:59808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.721 [2024-12-05 14:30:42.387661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.721 [2024-12-05 14:30:42.387674] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:59840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.721 [2024-12-05 14:30:42.387686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.721 [2024-12-05 14:30:42.387699] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1762060 is same with the state(5) to be set 00:24:57.721 [2024-12-05 14:30:42.387714] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.721 [2024-12-05 14:30:42.387724] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.721 [2024-12-05 14:30:42.387733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59848 len:8 PRP1 0x0 PRP2 0x0 00:24:57.721 [2024-12-05 14:30:42.387744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.721 [2024-12-05 14:30:42.387801] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1762060 was disconnected and freed. reset controller. 00:24:57.721 [2024-12-05 14:30:42.389153] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.721 [2024-12-05 14:30:42.389268] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1773a00 (9): Bad file descriptor 00:24:57.721 [2024-12-05 14:30:42.389389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.721 [2024-12-05 14:30:42.389443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.721 [2024-12-05 14:30:42.389463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1773a00 with addr=10.0.0.2, port=4421 00:24:57.721 [2024-12-05 14:30:42.389477] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1773a00 is same with the state(5) to be set 00:24:57.721 [2024-12-05 14:30:42.389499] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1773a00 (9): Bad file descriptor 00:24:57.721 [2024-12-05 14:30:42.389520] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.721 [2024-12-05 14:30:42.389534] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.722 [2024-12-05 14:30:42.389547] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.722 [2024-12-05 14:30:42.389569] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.722 [2024-12-05 14:30:42.389583] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.722 [2024-12-05 14:30:52.441030] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
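The dump above is SPDK's qpair teardown path printing every outstanding I/O as a command/completion pair; each completion status is shown by name followed by its status code type and status code in the form (SCT/SC), e.g. ASYMMETRIC ACCESS INACCESSIBLE (03/02) while the ANA state was flipped, then ABORTED - SQ DELETION (00/08) once the submission queue was deleted for the controller reset, which reconnects to 10.0.0.2 port 4421 and ends with "Resetting controller successful." A quick way to triage a log like this is to tally completions by status and count the reset attempts. The sketch below is a standalone helper, not part of the SPDK test suite; its regular expressions only target the message formats visible in this log, and the file path is simply wherever the console output was saved.

    #!/usr/bin/env python3
    """Summarize spdk_nvme_print_completion statuses and controller resets in a saved SPDK test log."""
    import re
    import sys
    from collections import Counter

    # Matches e.g. "spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 ..."
    COMPLETION_RE = re.compile(
        r"spdk_nvme_print_completion: \*NOTICE\*: (?P<status>.+?) \((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\)"
    )
    RESET_START_RE = re.compile(r"resetting controller")          # bdev_nvme disconnect notice
    RESET_OK_RE = re.compile(r"Resetting controller successful")
    RESET_FAIL_RE = re.compile(r"Resetting controller failed")
    CONNECT_ERR_RE = re.compile(r"connect\(\) failed, errno = (\d+)")

    def summarize(path):
        statuses = Counter()
        connect_errors = Counter()
        resets = ok = failed = 0
        with open(path, errors="replace") as fh:
            for line in fh:
                m = COMPLETION_RE.search(line)
                if m:
                    statuses[f"{m['status']} ({m['sct']}/{m['sc']})"] += 1
                if RESET_START_RE.search(line):
                    resets += 1
                if RESET_OK_RE.search(line):
                    ok += 1
                if RESET_FAIL_RE.search(line):
                    failed += 1
                m = CONNECT_ERR_RE.search(line)
                if m:
                    connect_errors[m.group(1)] += 1
        return statuses, resets, ok, failed, connect_errors

    if __name__ == "__main__":
        statuses, resets, ok, failed, connect_errors = summarize(sys.argv[1])
        for status, count in statuses.most_common():
            print(f"{count:8d}  {status}")
        print(f"reset attempts: {resets}, successful: {ok}, failed: {failed}")
        for code, count in connect_errors.items():
            print(f"connect() errno {code}: {count} occurrence(s)")

Run against the saved console log, the per-status counts make it easy to tell an ANA transition (03/02) apart from queue aborts issued during a reset (00/08), and the reset tally shows whether the reconnect eventually succeeded, as it does here.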
00:24:57.722 Received shutdown signal, test time was about 55.309036 seconds
00:24:57.722
00:24:57.722 Latency(us)
00:24:57.722 [2024-12-05T14:31:03.370Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:57.722 [2024-12-05T14:31:03.370Z] Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:57.722 Verification LBA range: start 0x0 length 0x4000
00:24:57.722 Nvme0n1 : 55.31 12272.91 47.94 0.00 0.00 10413.99 662.81 7015926.69
00:24:57.722 [2024-12-05T14:31:03.370Z] ===================================================================================================================
00:24:57.722 [2024-12-05T14:31:03.370Z] Total : 12272.91 47.94 0.00 0.00 10413.99 662.81 7015926.69
00:24:57.722 14:31:02 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:57.722 14:31:02 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:24:57.722 14:31:02 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:24:57.722 14:31:02 -- host/multipath.sh@125 -- # nvmftestfini
00:24:57.722 14:31:02 -- nvmf/common.sh@476 -- # nvmfcleanup
00:24:57.722 14:31:02 -- nvmf/common.sh@116 -- # sync
00:24:57.722 14:31:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:24:57.722 14:31:02 -- nvmf/common.sh@119 -- # set +e
00:24:57.722 14:31:02 -- nvmf/common.sh@120 -- # for i in {1..20}
00:24:57.722 14:31:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:24:57.722 rmmod nvme_tcp
00:24:57.722 rmmod nvme_fabrics
00:24:57.722 rmmod nvme_keyring
00:24:57.722 14:31:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:24:57.722 14:31:03 -- nvmf/common.sh@123 -- # set -e
00:24:57.722 14:31:03 -- nvmf/common.sh@124 -- # return 0
00:24:57.722 14:31:03 -- nvmf/common.sh@477 -- # '[' -n 99035 ']'
00:24:57.722 14:31:03 -- nvmf/common.sh@478 -- # killprocess 99035
00:24:57.722 14:31:03 -- common/autotest_common.sh@936 -- # '[' -z 99035 ']'
00:24:57.722 14:31:03 -- common/autotest_common.sh@940 -- # kill -0 99035
00:24:57.722 14:31:03 -- common/autotest_common.sh@941 -- # uname
00:24:57.722 14:31:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:24:57.722 14:31:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 99035
00:24:57.722 killing process with pid 99035
00:24:57.722 14:31:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:24:57.722 14:31:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:24:57.722 14:31:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 99035'
00:24:57.722 14:31:03 -- common/autotest_common.sh@955 -- # kill 99035
00:24:57.722 14:31:03 -- common/autotest_common.sh@960 -- # wait 99035
00:24:57.981 14:31:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:24:57.981 14:31:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:24:57.981 14:31:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:24:57.981 14:31:03 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:57.981 14:31:03 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:24:57.981 14:31:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:57.981 14:31:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:57.981 14:31:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:57.981 14:31:03 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:24:57.981
00:24:57.981 real 1m1.308s
00:24:57.981 user 2m51.616s
sys 0m14.516s 00:24:57.981 14:31:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:57.981 14:31:03 -- common/autotest_common.sh@10 -- # set +x 00:24:57.981 ************************************ 00:24:57.981 END TEST nvmf_multipath 00:24:57.981 ************************************ 00:24:57.981 14:31:03 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:57.981 14:31:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:57.981 14:31:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:57.981 14:31:03 -- common/autotest_common.sh@10 -- # set +x 00:24:57.981 ************************************ 00:24:57.981 START TEST nvmf_timeout 00:24:57.981 ************************************ 00:24:57.981 14:31:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:57.981 * Looking for test storage... 00:24:57.981 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:57.981 14:31:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:57.981 14:31:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:57.981 14:31:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:57.981 14:31:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:57.981 14:31:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:57.981 14:31:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:57.981 14:31:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:57.981 14:31:03 -- scripts/common.sh@335 -- # IFS=.-: 00:24:57.981 14:31:03 -- scripts/common.sh@335 -- # read -ra ver1 00:24:57.981 14:31:03 -- scripts/common.sh@336 -- # IFS=.-: 00:24:57.981 14:31:03 -- scripts/common.sh@336 -- # read -ra ver2 00:24:57.981 14:31:03 -- scripts/common.sh@337 -- # local 'op=<' 00:24:57.981 14:31:03 -- scripts/common.sh@339 -- # ver1_l=2 00:24:57.981 14:31:03 -- scripts/common.sh@340 -- # ver2_l=1 00:24:57.981 14:31:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:57.981 14:31:03 -- scripts/common.sh@343 -- # case "$op" in 00:24:57.981 14:31:03 -- scripts/common.sh@344 -- # : 1 00:24:57.981 14:31:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:57.981 14:31:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:57.981 14:31:03 -- scripts/common.sh@364 -- # decimal 1 00:24:57.981 14:31:03 -- scripts/common.sh@352 -- # local d=1 00:24:57.981 14:31:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:57.981 14:31:03 -- scripts/common.sh@354 -- # echo 1 00:24:57.981 14:31:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:57.981 14:31:03 -- scripts/common.sh@365 -- # decimal 2 00:24:57.981 14:31:03 -- scripts/common.sh@352 -- # local d=2 00:24:57.981 14:31:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:57.981 14:31:03 -- scripts/common.sh@354 -- # echo 2 00:24:57.981 14:31:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:57.981 14:31:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:57.981 14:31:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:57.981 14:31:03 -- scripts/common.sh@367 -- # return 0 00:24:57.981 14:31:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:57.982 14:31:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:57.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.982 --rc genhtml_branch_coverage=1 00:24:57.982 --rc genhtml_function_coverage=1 00:24:57.982 --rc genhtml_legend=1 00:24:57.982 --rc geninfo_all_blocks=1 00:24:57.982 --rc geninfo_unexecuted_blocks=1 00:24:57.982 00:24:57.982 ' 00:24:57.982 14:31:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:57.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.982 --rc genhtml_branch_coverage=1 00:24:57.982 --rc genhtml_function_coverage=1 00:24:57.982 --rc genhtml_legend=1 00:24:57.982 --rc geninfo_all_blocks=1 00:24:57.982 --rc geninfo_unexecuted_blocks=1 00:24:57.982 00:24:57.982 ' 00:24:57.982 14:31:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:57.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.982 --rc genhtml_branch_coverage=1 00:24:57.982 --rc genhtml_function_coverage=1 00:24:57.982 --rc genhtml_legend=1 00:24:57.982 --rc geninfo_all_blocks=1 00:24:57.982 --rc geninfo_unexecuted_blocks=1 00:24:57.982 00:24:57.982 ' 00:24:57.982 14:31:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:57.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.982 --rc genhtml_branch_coverage=1 00:24:57.982 --rc genhtml_function_coverage=1 00:24:57.982 --rc genhtml_legend=1 00:24:57.982 --rc geninfo_all_blocks=1 00:24:57.982 --rc geninfo_unexecuted_blocks=1 00:24:57.982 00:24:57.982 ' 00:24:57.982 14:31:03 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:57.982 14:31:03 -- nvmf/common.sh@7 -- # uname -s 00:24:57.982 14:31:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:57.982 14:31:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:57.982 14:31:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:57.982 14:31:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:57.982 14:31:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:57.982 14:31:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:57.982 14:31:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:57.982 14:31:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:57.982 14:31:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:57.982 14:31:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:57.982 14:31:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:24:57.982 
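The xtrace above is scripts/common.sh evaluating lt 1.15 2 to decide which lcov flags to export: both version strings are split on '.', '-' and ':' and the numeric fields are compared one by one. A self-contained sketch of the same comparison follows; it is an illustration rather than the common.sh source and assumes purely numeric fields.

    ver_lt() {                         # succeeds when $1 is strictly older than $2
        local IFS=.-: v=0 a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        while (( v < ${#a[@]} || v < ${#b[@]} )); do
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
            (( v++ ))
        done
        return 1                       # equal versions are not "less than"
    }

    ver_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.x lcov: keep the --rc lcov_* options"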
14:31:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:24:57.982 14:31:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:57.982 14:31:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:57.982 14:31:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:57.982 14:31:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:57.982 14:31:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.982 14:31:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.982 14:31:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.982 14:31:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.982 14:31:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.982 14:31:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.982 14:31:03 -- paths/export.sh@5 -- # export PATH 00:24:57.982 14:31:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.982 14:31:03 -- nvmf/common.sh@46 -- # : 0 00:24:57.982 14:31:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:57.982 14:31:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:57.982 14:31:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:57.982 14:31:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:57.982 14:31:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:57.982 14:31:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
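At this point nvmf/common.sh has generated a throwaway host NQN with nvme gen-hostnqn, derived NVME_HOSTID from its UUID, and packed both into the NVME_HOST array next to NVME_CONNECT='nvme connect'. A hedged illustration of how these variables are typically combined when a kernel initiator connects to the target is below; the subsystem NQN, address and port are the ones used later in this log, and the call site itself is an assumption rather than something this particular test runs here.

    # "$NVME_CONNECT" plus "${NVME_HOST[@]}" expands to:
    #   nvme connect --hostnqn=<generated NQN> --hostid=<generated UUID> ...
    $NVME_CONNECT "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420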
00:24:57.982 14:31:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:57.982 14:31:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:57.982 14:31:03 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:57.982 14:31:03 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:57.982 14:31:03 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:57.982 14:31:03 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:57.982 14:31:03 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:57.982 14:31:03 -- host/timeout.sh@19 -- # nvmftestinit 00:24:57.982 14:31:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:57.982 14:31:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:57.982 14:31:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:57.982 14:31:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:57.982 14:31:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:57.982 14:31:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.982 14:31:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:57.982 14:31:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:58.241 14:31:03 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:24:58.241 14:31:03 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:24:58.241 14:31:03 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:24:58.241 14:31:03 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:24:58.241 14:31:03 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:24:58.241 14:31:03 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:24:58.241 14:31:03 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:58.241 14:31:03 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:58.241 14:31:03 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:58.241 14:31:03 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:24:58.241 14:31:03 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:58.241 14:31:03 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:58.241 14:31:03 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:58.241 14:31:03 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:58.241 14:31:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:58.241 14:31:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:58.241 14:31:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:58.241 14:31:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:58.241 14:31:03 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:24:58.241 14:31:03 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:24:58.241 Cannot find device "nvmf_tgt_br" 00:24:58.241 14:31:03 -- nvmf/common.sh@154 -- # true 00:24:58.241 14:31:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:24:58.241 Cannot find device "nvmf_tgt_br2" 00:24:58.241 14:31:03 -- nvmf/common.sh@155 -- # true 00:24:58.241 14:31:03 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:24:58.241 14:31:03 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:24:58.241 Cannot find device "nvmf_tgt_br" 00:24:58.241 14:31:03 -- nvmf/common.sh@157 -- # true 00:24:58.241 14:31:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:24:58.241 Cannot find device "nvmf_tgt_br2" 00:24:58.241 14:31:03 -- nvmf/common.sh@158 -- # true 00:24:58.241 14:31:03 -- 
nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:24:58.241 14:31:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:24:58.241 14:31:03 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:58.241 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:58.241 14:31:03 -- nvmf/common.sh@161 -- # true 00:24:58.241 14:31:03 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:58.241 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:58.241 14:31:03 -- nvmf/common.sh@162 -- # true 00:24:58.241 14:31:03 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:24:58.241 14:31:03 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:58.241 14:31:03 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:58.241 14:31:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:58.242 14:31:03 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:58.242 14:31:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:58.242 14:31:03 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:58.242 14:31:03 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:58.242 14:31:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:58.242 14:31:03 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:24:58.242 14:31:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:24:58.242 14:31:03 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:24:58.242 14:31:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:24:58.242 14:31:03 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:58.242 14:31:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:58.242 14:31:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:58.242 14:31:03 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:24:58.500 14:31:03 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:24:58.501 14:31:03 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:24:58.501 14:31:03 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:58.501 14:31:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:58.501 14:31:03 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:58.501 14:31:03 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:58.501 14:31:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:24:58.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:58.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:24:58.501 00:24:58.501 --- 10.0.0.2 ping statistics --- 00:24:58.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.501 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:24:58.501 14:31:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:24:58.501 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:58.501 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.028 ms 00:24:58.501 00:24:58.501 --- 10.0.0.3 ping statistics --- 00:24:58.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.501 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:24:58.501 14:31:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:58.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:58.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:24:58.501 00:24:58.501 --- 10.0.0.1 ping statistics --- 00:24:58.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.501 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:24:58.501 14:31:03 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:58.501 14:31:03 -- nvmf/common.sh@421 -- # return 0 00:24:58.501 14:31:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:58.501 14:31:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:58.501 14:31:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:58.501 14:31:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:58.501 14:31:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:58.501 14:31:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:58.501 14:31:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:58.501 14:31:03 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:24:58.501 14:31:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:58.501 14:31:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:58.501 14:31:03 -- common/autotest_common.sh@10 -- # set +x 00:24:58.501 14:31:03 -- nvmf/common.sh@469 -- # nvmfpid=100415 00:24:58.501 14:31:03 -- nvmf/common.sh@470 -- # waitforlisten 100415 00:24:58.501 14:31:03 -- common/autotest_common.sh@829 -- # '[' -z 100415 ']' 00:24:58.501 14:31:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.501 14:31:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:58.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:58.501 14:31:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:58.501 14:31:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.501 14:31:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:58.501 14:31:03 -- common/autotest_common.sh@10 -- # set +x 00:24:58.501 [2024-12-05 14:31:04.029531] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:58.501 [2024-12-05 14:31:04.029616] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:58.760 [2024-12-05 14:31:04.163160] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:58.760 [2024-12-05 14:31:04.221494] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:58.760 [2024-12-05 14:31:04.221618] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:58.760 [2024-12-05 14:31:04.221629] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
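The nvmftestinit/nvmf_veth_init trace above first tears down anything left over from a previous run (hence the "Cannot find device" and "Cannot open network namespace" messages on a clean host), then builds the virtual test network and verifies it with the three pings. Re-assembled from the traced ip commands, and not quoted verbatim from common.sh, the topology looks like this:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, 10.0.0.1/24
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side,    10.0.0.2/24, moved into the netns
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # second target,  10.0.0.3/24, moved into the netns
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip link add nvmf_br type bridge                              # all *_br peers are enslaved to one bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

The iptables rules that follow in the trace only open TCP port 4420 on nvmf_init_if and allow forwarding across nvmf_br; everything else rides on the 10.0.0.0/24 addresses assigned above.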
00:24:58.760 [2024-12-05 14:31:04.221636] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:58.760 [2024-12-05 14:31:04.222489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.760 [2024-12-05 14:31:04.222536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:59.328 14:31:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:59.328 14:31:04 -- common/autotest_common.sh@862 -- # return 0 00:24:59.328 14:31:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:59.328 14:31:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:59.328 14:31:04 -- common/autotest_common.sh@10 -- # set +x 00:24:59.587 14:31:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:59.587 14:31:05 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:59.587 14:31:05 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:59.587 [2024-12-05 14:31:05.196760] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:59.587 14:31:05 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:59.845 Malloc0 00:24:59.845 14:31:05 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:00.102 14:31:05 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:00.360 14:31:05 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:00.618 [2024-12-05 14:31:06.061044] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:00.618 14:31:06 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:25:00.618 14:31:06 -- host/timeout.sh@32 -- # bdevperf_pid=100502 00:25:00.618 14:31:06 -- host/timeout.sh@34 -- # waitforlisten 100502 /var/tmp/bdevperf.sock 00:25:00.618 14:31:06 -- common/autotest_common.sh@829 -- # '[' -z 100502 ']' 00:25:00.618 14:31:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:00.618 14:31:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:00.618 14:31:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:00.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:00.618 14:31:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:00.618 14:31:06 -- common/autotest_common.sh@10 -- # set +x 00:25:00.618 [2024-12-05 14:31:06.116796] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
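With nvmf_tgt listening on /var/tmp/spdk.sock inside the namespace, timeout.sh provisions the target entirely over rpc.py and then starts bdevperf on its own RPC socket. Collected from the traced calls above, with the script path shortened to $rpc_py for readability, the target-side sequence is:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc_py nvmf_create_transport -t tcp -o -u 8192      # TCP transport with the NVMF_TRANSPORT_OPTS set earlier
    $rpc_py bdev_malloc_create 64 512 -b Malloc0         # 64 MiB malloc bdev, 512-byte blocks
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The initiator side then attaches through bdevperf's socket with bdev_nvme_attach_controller ... --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2, which is the reconnect behavior exercised once the 4420 listener is removed a few lines further down.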
00:25:00.618 [2024-12-05 14:31:06.116896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100502 ] 00:25:00.618 [2024-12-05 14:31:06.252453] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.876 [2024-12-05 14:31:06.331150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:01.444 14:31:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:01.444 14:31:06 -- common/autotest_common.sh@862 -- # return 0 00:25:01.444 14:31:06 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:01.703 14:31:07 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:25:01.963 NVMe0n1 00:25:01.963 14:31:07 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:01.963 14:31:07 -- host/timeout.sh@51 -- # rpc_pid=100553 00:25:01.963 14:31:07 -- host/timeout.sh@53 -- # sleep 1 00:25:01.963 Running I/O for 10 seconds... 00:25:02.899 14:31:08 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:03.161 [2024-12-05 14:31:08.640749] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.161 [2024-12-05 14:31:08.641370] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.161 [2024-12-05 14:31:08.641468] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.161 [2024-12-05 14:31:08.641529] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.161 [2024-12-05 14:31:08.641584] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.161 [2024-12-05 14:31:08.641637] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.161 [2024-12-05 14:31:08.641710] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.161 [2024-12-05 14:31:08.641763] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.161 [2024-12-05 14:31:08.641865] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.161 [2024-12-05 14:31:08.641925] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.161 [2024-12-05 14:31:08.641981] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.161 [2024-12-05 14:31:08.642038] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.161 
[2024-12-05 14:31:08.642092] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.161 [2024-12-05 14:31:08.642146] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.161 [2024-12-05 14:31:08.642215] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.161 [2024-12-05 14:31:08.642309] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.161 [2024-12-05 14:31:08.642381] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.161 [2024-12-05 14:31:08.642436] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.161 [2024-12-05 14:31:08.642507] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.161 [2024-12-05 14:31:08.642562] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.161 [2024-12-05 14:31:08.642621] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.161 [2024-12-05 14:31:08.642689] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.161 [2024-12-05 14:31:08.642762] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.161 [2024-12-05 14:31:08.642846] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.161 [2024-12-05 14:31:08.642908] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.161 [2024-12-05 14:31:08.642981] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.643050] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.643118] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.643202] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.643263] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.643323] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.643379] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.643454] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.643507] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the 
state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.643561] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.643617] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.643685] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.643741] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.643794] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.643878] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.643940] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.644023] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.644082] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.644142] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.644210] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.644266] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.644345] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.644429] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.644496] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.644550] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.644614] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.644669] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.644723] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.644780] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.644869] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.644929] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.644981] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.645036] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.645092] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.645143] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.645195] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.645249] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.645308] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.645397] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.645466] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.645521] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.645575] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.645630] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.645703] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.645777] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.645845] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.645907] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f490 is same with the state(5) to be set 00:25:03.162 [2024-12-05 14:31:08.646324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.162 [2024-12-05 14:31:08.646384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.162 [2024-12-05 14:31:08.646404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.162 [2024-12-05 14:31:08.646415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.162 [2024-12-05 14:31:08.646427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.162 
[2024-12-05 14:31:08.646436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.162 [2024-12-05 14:31:08.646447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.162 [2024-12-05 14:31:08.646456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.162 [2024-12-05 14:31:08.646467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.162 [2024-12-05 14:31:08.646477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.162 [2024-12-05 14:31:08.646503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.162 [2024-12-05 14:31:08.646512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.162 [2024-12-05 14:31:08.646523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.162 [2024-12-05 14:31:08.646532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.162 [2024-12-05 14:31:08.646542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.162 [2024-12-05 14:31:08.646551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.162 [2024-12-05 14:31:08.646561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.162 [2024-12-05 14:31:08.646569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.162 [2024-12-05 14:31:08.646585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.162 [2024-12-05 14:31:08.646593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.162 [2024-12-05 14:31:08.646604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.162 [2024-12-05 14:31:08.646612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.162 [2024-12-05 14:31:08.646630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.162 [2024-12-05 14:31:08.646638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.162 [2024-12-05 14:31:08.646648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.162 [2024-12-05 14:31:08.646656] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.162 [2024-12-05 14:31:08.646666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:1456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.162 [2024-12-05 14:31:08.646675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.162 [2024-12-05 14:31:08.646685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.162 [2024-12-05 14:31:08.646693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.162 [2024-12-05 14:31:08.646703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.162 [2024-12-05 14:31:08.646712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.162 [2024-12-05 14:31:08.646723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.162 [2024-12-05 14:31:08.646732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.162 [2024-12-05 14:31:08.646744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.162 [2024-12-05 14:31:08.646752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.163 [2024-12-05 14:31:08.646763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.163 [2024-12-05 14:31:08.646771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.163 [2024-12-05 14:31:08.646782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.163 [2024-12-05 14:31:08.646790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.163 [2024-12-05 14:31:08.646800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.163 [2024-12-05 14:31:08.646809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.163 [2024-12-05 14:31:08.646847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.163 [2024-12-05 14:31:08.646875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.163 [2024-12-05 14:31:08.646886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.163 [2024-12-05 14:31:08.646895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.163 [2024-12-05 14:31:08.646906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.163 [2024-12-05 14:31:08.646914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.163 [2024-12-05 14:31:08.646940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.163 [2024-12-05 14:31:08.646950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.163 [2024-12-05 14:31:08.646960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.163 [2024-12-05 14:31:08.646969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.163 [2024-12-05 14:31:08.646979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.163 [2024-12-05 14:31:08.646988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.163 [2024-12-05 14:31:08.646998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.163 [2024-12-05 14:31:08.647007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.163 [2024-12-05 14:31:08.647018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.163 [2024-12-05 14:31:08.647027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.163 [2024-12-05 14:31:08.647037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.163 [2024-12-05 14:31:08.647047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.163 [2024-12-05 14:31:08.647057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.163 [2024-12-05 14:31:08.647066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.163 [2024-12-05 14:31:08.647077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.163 [2024-12-05 14:31:08.647085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.163 [2024-12-05 14:31:08.647096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.163 [2024-12-05 14:31:08.647105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:03.163 [2024-12-05 14:31:08.647116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.163 [2024-12-05 14:31:08.647125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.163 [2024-12-05 14:31:08.647135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.163 [2024-12-05 14:31:08.647144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.163 [2024-12-05 14:31:08.647155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.163 [2024-12-05 14:31:08.647163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.163 [2024-12-05 14:31:08.647174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.163 [2024-12-05 14:31:08.647198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.163 [2024-12-05 14:31:08.647209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.163 [2024-12-05 14:31:08.647217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.163 [2024-12-05 14:31:08.647228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.163 [2024-12-05 14:31:08.647236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.163 [2024-12-05 14:31:08.647247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.163 [2024-12-05 14:31:08.647255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.163 [2024-12-05 14:31:08.647265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.163 [2024-12-05 14:31:08.647273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.163 [2024-12-05 14:31:08.647283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.163 [2024-12-05 14:31:08.647291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.163 [2024-12-05 14:31:08.647301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.163 [2024-12-05 14:31:08.647310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.163 [2024-12-05 14:31:08.647321] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.163 [2024-12-05 14:31:08.647330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.163 [2024-12-05 14:31:08.647340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.163 [2024-12-05 14:31:08.647355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.163 [2024-12-05 14:31:08.647366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.163 [2024-12-05 14:31:08.647374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.163 [2024-12-05 14:31:08.647386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.163 [2024-12-05 14:31:08.647395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.163 [2024-12-05 14:31:08.647405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.163 [2024-12-05 14:31:08.647413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.163 [2024-12-05 14:31:08.647425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.163 [2024-12-05 14:31:08.647433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.163 [2024-12-05 14:31:08.647444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.163 [2024-12-05 14:31:08.647453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.163 [2024-12-05 14:31:08.647463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.163 [2024-12-05 14:31:08.647472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.163 [2024-12-05 14:31:08.647482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.163 [2024-12-05 14:31:08.647490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.163 [2024-12-05 14:31:08.647500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.163 [2024-12-05 14:31:08.647509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.163 [2024-12-05 14:31:08.647519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:38 nsid:1 lba:1616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.163 [2024-12-05 14:31:08.647527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.163 [2024-12-05 14:31:08.647537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.163 [2024-12-05 14:31:08.647546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.163 [2024-12-05 14:31:08.647556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.163 [2024-12-05 14:31:08.647564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.163 [2024-12-05 14:31:08.647574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.163 [2024-12-05 14:31:08.647583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.163 [2024-12-05 14:31:08.647593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.164 [2024-12-05 14:31:08.647601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.164 [2024-12-05 14:31:08.647611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.164 [2024-12-05 14:31:08.647620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.164 [2024-12-05 14:31:08.647631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.164 [2024-12-05 14:31:08.647640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.164 [2024-12-05 14:31:08.647650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.164 [2024-12-05 14:31:08.647658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.164 [2024-12-05 14:31:08.647669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.164 [2024-12-05 14:31:08.647677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.164 [2024-12-05 14:31:08.647687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.164 [2024-12-05 14:31:08.647695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.164 [2024-12-05 14:31:08.647705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1696 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:03.164 [2024-12-05 14:31:08.647713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.164 [2024-12-05 14:31:08.647723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.164 [2024-12-05 14:31:08.647732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.164 [2024-12-05 14:31:08.647743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.164 [2024-12-05 14:31:08.647751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.164 [2024-12-05 14:31:08.647761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.164 [2024-12-05 14:31:08.647770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.164 [2024-12-05 14:31:08.647780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.164 [2024-12-05 14:31:08.647789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.164 [2024-12-05 14:31:08.647799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.164 [2024-12-05 14:31:08.647808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.164 [2024-12-05 14:31:08.647817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.164 [2024-12-05 14:31:08.647827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.164 [2024-12-05 14:31:08.647838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.164 [2024-12-05 14:31:08.647846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.164 [2024-12-05 14:31:08.647856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.164 [2024-12-05 14:31:08.647887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.164 [2024-12-05 14:31:08.647899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.164 [2024-12-05 14:31:08.647907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.164 [2024-12-05 14:31:08.647918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.164 [2024-12-05 14:31:08.647927] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.164 [2024-12-05 14:31:08.647937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.164 [2024-12-05 14:31:08.647945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.164 [2024-12-05 14:31:08.647956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.164 [2024-12-05 14:31:08.647965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.164 [2024-12-05 14:31:08.647985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.164 [2024-12-05 14:31:08.647994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.164 [2024-12-05 14:31:08.648005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.164 [2024-12-05 14:31:08.648014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.164 [2024-12-05 14:31:08.648025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.164 [2024-12-05 14:31:08.648033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.164 [2024-12-05 14:31:08.648050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.164 [2024-12-05 14:31:08.648059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.164 [2024-12-05 14:31:08.648069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.164 [2024-12-05 14:31:08.648078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.164 [2024-12-05 14:31:08.648089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.164 [2024-12-05 14:31:08.648097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.164 [2024-12-05 14:31:08.648107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.164 [2024-12-05 14:31:08.648116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.164 [2024-12-05 14:31:08.648127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.164 [2024-12-05 14:31:08.648135] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.164 [2024-12-05 14:31:08.648146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.164 [2024-12-05 14:31:08.648154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.164 [2024-12-05 14:31:08.648164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.164 [2024-12-05 14:31:08.648173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.164 [2024-12-05 14:31:08.648183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.164 [2024-12-05 14:31:08.648192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.164 [2024-12-05 14:31:08.648202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.164 [2024-12-05 14:31:08.648211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.164 [2024-12-05 14:31:08.648221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.164 [2024-12-05 14:31:08.648230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.164 [2024-12-05 14:31:08.648240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.164 [2024-12-05 14:31:08.648248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.164 [2024-12-05 14:31:08.648273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.164 [2024-12-05 14:31:08.648282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.164 [2024-12-05 14:31:08.648292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.164 [2024-12-05 14:31:08.648300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.164 [2024-12-05 14:31:08.648310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.164 [2024-12-05 14:31:08.648318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.164 [2024-12-05 14:31:08.648344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.164 [2024-12-05 14:31:08.648353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.164 [2024-12-05 14:31:08.648363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.164 [2024-12-05 14:31:08.648371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.164 [2024-12-05 14:31:08.648383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.164 [2024-12-05 14:31:08.648392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.164 [2024-12-05 14:31:08.648413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.164 [2024-12-05 14:31:08.648422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.164 [2024-12-05 14:31:08.648432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.165 [2024-12-05 14:31:08.648441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.165 [2024-12-05 14:31:08.648451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.165 [2024-12-05 14:31:08.648460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.165 [2024-12-05 14:31:08.648470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.165 [2024-12-05 14:31:08.648478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.165 [2024-12-05 14:31:08.648489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.165 [2024-12-05 14:31:08.648497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.165 [2024-12-05 14:31:08.648508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.165 [2024-12-05 14:31:08.648517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.165 [2024-12-05 14:31:08.648527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.165 [2024-12-05 14:31:08.648536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.165 [2024-12-05 14:31:08.648546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.165 [2024-12-05 14:31:08.648554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.165 [2024-12-05 
14:31:08.648564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.165 [2024-12-05 14:31:08.648573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.165 [2024-12-05 14:31:08.648583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.165 [2024-12-05 14:31:08.648594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.165 [2024-12-05 14:31:08.648608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.165 [2024-12-05 14:31:08.648617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.165 [2024-12-05 14:31:08.648627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.165 [2024-12-05 14:31:08.648635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.165 [2024-12-05 14:31:08.648645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.165 [2024-12-05 14:31:08.648654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.165 [2024-12-05 14:31:08.648664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.165 [2024-12-05 14:31:08.648673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.165 [2024-12-05 14:31:08.648683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.165 [2024-12-05 14:31:08.648692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.165 [2024-12-05 14:31:08.648702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.165 [2024-12-05 14:31:08.648711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.165 [2024-12-05 14:31:08.648721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.165 [2024-12-05 14:31:08.648729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.165 [2024-12-05 14:31:08.648739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.165 [2024-12-05 14:31:08.648747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.165 [2024-12-05 14:31:08.648757] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.165 [2024-12-05 14:31:08.648766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.165 [2024-12-05 14:31:08.648776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.165 [2024-12-05 14:31:08.648785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.165 [2024-12-05 14:31:08.648795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.165 [2024-12-05 14:31:08.648803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.165 [2024-12-05 14:31:08.648814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.165 [2024-12-05 14:31:08.648822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.165 [2024-12-05 14:31:08.648841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.165 [2024-12-05 14:31:08.648852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.165 [2024-12-05 14:31:08.648862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.165 [2024-12-05 14:31:08.648871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.165 [2024-12-05 14:31:08.648881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.165 [2024-12-05 14:31:08.648889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.165 [2024-12-05 14:31:08.648899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.165 [2024-12-05 14:31:08.648908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.165 [2024-12-05 14:31:08.648923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.165 [2024-12-05 14:31:08.648932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.165 [2024-12-05 14:31:08.648942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.165 [2024-12-05 14:31:08.648951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.165 [2024-12-05 14:31:08.648961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:47 nsid:1 lba:1464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.165 [2024-12-05 14:31:08.648970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.165 [2024-12-05 14:31:08.648980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.165 [2024-12-05 14:31:08.648989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.165 [2024-12-05 14:31:08.648999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.165 [2024-12-05 14:31:08.649008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.165 [2024-12-05 14:31:08.649018] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132f780 is same with the state(5) to be set 00:25:03.165 [2024-12-05 14:31:08.649029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.165 [2024-12-05 14:31:08.649037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.165 [2024-12-05 14:31:08.649045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1504 len:8 PRP1 0x0 PRP2 0x0 00:25:03.165 [2024-12-05 14:31:08.649053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.165 [2024-12-05 14:31:08.649100] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x132f780 was disconnected and freed. reset controller. 00:25:03.165 [2024-12-05 14:31:08.649323] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.165 [2024-12-05 14:31:08.649407] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12aa8c0 (9): Bad file descriptor 00:25:03.165 [2024-12-05 14:31:08.649501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.165 [2024-12-05 14:31:08.649546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.165 [2024-12-05 14:31:08.649562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12aa8c0 with addr=10.0.0.2, port=4420 00:25:03.165 [2024-12-05 14:31:08.649572] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa8c0 is same with the state(5) to be set 00:25:03.165 [2024-12-05 14:31:08.649589] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12aa8c0 (9): Bad file descriptor 00:25:03.165 [2024-12-05 14:31:08.649604] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.165 [2024-12-05 14:31:08.649613] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.165 [2024-12-05 14:31:08.649623] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.165 [2024-12-05 14:31:08.649641] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:03.165 [2024-12-05 14:31:08.649651] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.165 14:31:08 -- host/timeout.sh@56 -- # sleep 2 00:25:05.070 [2024-12-05 14:31:10.649743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.070 [2024-12-05 14:31:10.649844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.070 [2024-12-05 14:31:10.649864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12aa8c0 with addr=10.0.0.2, port=4420 00:25:05.070 [2024-12-05 14:31:10.649877] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa8c0 is same with the state(5) to be set 00:25:05.070 [2024-12-05 14:31:10.649900] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12aa8c0 (9): Bad file descriptor 00:25:05.070 [2024-12-05 14:31:10.649928] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:05.070 [2024-12-05 14:31:10.649939] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:05.070 [2024-12-05 14:31:10.649950] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:05.070 [2024-12-05 14:31:10.649973] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:05.070 [2024-12-05 14:31:10.649983] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:05.070 14:31:10 -- host/timeout.sh@57 -- # get_controller 00:25:05.070 14:31:10 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:05.070 14:31:10 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:25:05.329 14:31:10 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:25:05.329 14:31:10 -- host/timeout.sh@58 -- # get_bdev 00:25:05.329 14:31:10 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:25:05.329 14:31:10 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:25:05.587 14:31:11 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:25:05.587 14:31:11 -- host/timeout.sh@61 -- # sleep 5 00:25:07.524 [2024-12-05 14:31:12.650116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:07.524 [2024-12-05 14:31:12.650206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:07.524 [2024-12-05 14:31:12.650224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12aa8c0 with addr=10.0.0.2, port=4420 00:25:07.524 [2024-12-05 14:31:12.650238] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa8c0 is same with the state(5) to be set 00:25:07.524 [2024-12-05 14:31:12.650260] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12aa8c0 (9): Bad file descriptor 00:25:07.524 [2024-12-05 14:31:12.650277] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:07.524 [2024-12-05 14:31:12.650287] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:07.524 [2024-12-05 14:31:12.650297] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in 
failed state. [2024-12-05 14:31:12.650322] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. [2024-12-05 14:31:12.650333] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:09.455 [2024-12-05 14:31:14.650363] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. [2024-12-05 14:31:14.650415] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state [2024-12-05 14:31:14.650442] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed [2024-12-05 14:31:14.650452] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state [2024-12-05 14:31:14.650476] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:10.021
00:25:10.021 Latency(us)
00:25:10.021 [2024-12-05T14:31:15.669Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:10.021 [2024-12-05T14:31:15.669Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:10.021 Verification LBA range: start 0x0 length 0x4000
00:25:10.021 NVMe0n1 : 8.12 2033.25 7.94 15.76 0.00 62357.44 2338.44 7046430.72
00:25:10.021 [2024-12-05T14:31:15.669Z] ===================================================================================================================
00:25:10.021 [2024-12-05T14:31:15.669Z] Total : 2033.25 7.94 15.76 0.00 62357.44 2338.44 7046430.72
00:25:10.021 0
00:25:10.587 14:31:16 -- host/timeout.sh@62 -- # get_controller
00:25:10.587 14:31:16 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:10.587 14:31:16 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:25:10.846 14:31:16 -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:25:10.846 14:31:16 -- host/timeout.sh@63 -- # get_bdev
00:25:10.846 14:31:16 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:25:10.846 14:31:16 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:25:11.104 14:31:16 -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:25:11.104 14:31:16 -- host/timeout.sh@65 -- # wait 100553
00:25:11.104 14:31:16 -- host/timeout.sh@67 -- # killprocess 100502
00:25:11.104 14:31:16 -- common/autotest_common.sh@936 -- # '[' -z 100502 ']'
00:25:11.104 14:31:16 -- common/autotest_common.sh@940 -- # kill -0 100502
00:25:11.104 14:31:16 -- common/autotest_common.sh@941 -- # uname
00:25:11.104 14:31:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:25:11.104 14:31:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100502
00:25:11.104 14:31:16 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:25:11.104 14:31:16 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:25:11.104 killing process with pid 100502
00:25:11.104 14:31:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100502'
00:25:11.104 Received shutdown signal, test time was about 9.159725 seconds
00:25:11.104
00:25:11.104 Latency(us)
00:25:11.104 [2024-12-05T14:31:16.752Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:11.104 [2024-12-05T14:31:16.752Z]
===================================================================================================================
00:25:11.104 [2024-12-05T14:31:16.752Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:11.104 14:31:16 -- common/autotest_common.sh@955 -- # kill 100502
00:25:11.104 14:31:16 -- common/autotest_common.sh@960 -- # wait 100502
00:25:11.362 14:31:16 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:11.619 [2024-12-05 14:31:17.071503] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:11.619 14:31:17 -- host/timeout.sh@74 -- # bdevperf_pid=100706
00:25:11.619 14:31:17 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:25:11.619 14:31:17 -- host/timeout.sh@76 -- # waitforlisten 100706 /var/tmp/bdevperf.sock
00:25:11.619 14:31:17 -- common/autotest_common.sh@829 -- # '[' -z 100706 ']'
00:25:11.619 14:31:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:11.619 14:31:17 -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:11.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:25:11.619 14:31:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:11.619 14:31:17 -- common/autotest_common.sh@838 -- # xtrace_disable
00:25:11.619 14:31:17 -- common/autotest_common.sh@10 -- # set +x
00:25:11.619 [2024-12-05 14:31:17.131277] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:25:11.619 [2024-12-05 14:31:17.131366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100706 ]
00:25:11.619 [2024-12-05 14:31:17.264710] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:11.878 [2024-12-05 14:31:17.327636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:25:12.815 14:31:18 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:12.815 14:31:18 -- common/autotest_common.sh@862 -- # return 0
00:25:12.815 14:31:18 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:25:12.815 14:31:18 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:25:13.074 NVMe0n1
00:25:13.074 14:31:18 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:13.074 14:31:18 -- host/timeout.sh@84 -- # rpc_pid=100755
00:25:13.074 14:31:18 -- host/timeout.sh@86 -- # sleep 1
00:25:13.074 Running I/O for 10 seconds...
00:25:14.010 14:31:19 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:14.273 [2024-12-05 14:31:19.826730] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.826784] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.826819] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.826843] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.826854] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.826862] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.826869] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.826876] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.826884] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.826891] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.826899] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.826906] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.826912] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.826919] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.826926] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.826932] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.826939] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.826947] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.826955] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.826962] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.826969] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.826976] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.826984] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.826990] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.826997] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827005] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827012] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827018] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827025] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827032] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827039] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827046] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827054] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827061] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827075] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827082] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827099] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827106] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827113] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827119] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827126] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827133] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827148] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827156] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827164] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827171] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827197] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827205] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827212] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827219] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827226] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827233] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827240] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827248] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827255] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827262] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827269] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827277] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827283] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827290] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827297] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827303] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827310] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827316] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 
00:25:14.274 [2024-12-05 14:31:19.827322] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827328] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827334] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827340] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827346] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827352] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827358] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827364] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827371] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827378] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04ca0 is same with the state(5) to be set 00:25:14.274 [2024-12-05 14:31:19.827707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.274 [2024-12-05 14:31:19.827749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.274 [2024-12-05 14:31:19.827773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.274 [2024-12-05 14:31:19.827784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.274 [2024-12-05 14:31:19.827795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.274 [2024-12-05 14:31:19.827835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.274 [2024-12-05 14:31:19.827866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.274 [2024-12-05 14:31:19.827877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.274 [2024-12-05 14:31:19.827888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.274 [2024-12-05 14:31:19.827897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.274 [2024-12-05 14:31:19.827909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10000 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:14.274 [2024-12-05 14:31:19.827918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.274 [2024-12-05 14:31:19.827930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.274 [2024-12-05 14:31:19.827939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.274 [2024-12-05 14:31:19.827950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.274 [2024-12-05 14:31:19.827960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.274 [2024-12-05 14:31:19.827971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.275 [2024-12-05 14:31:19.828007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.275 [2024-12-05 14:31:19.828026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.275 [2024-12-05 14:31:19.828035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.275 [2024-12-05 14:31:19.828047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:10464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.275 [2024-12-05 14:31:19.828057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.275 [2024-12-05 14:31:19.828069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.275 [2024-12-05 14:31:19.828078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.275 [2024-12-05 14:31:19.828090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.275 [2024-12-05 14:31:19.828100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.275 [2024-12-05 14:31:19.828111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.275 [2024-12-05 14:31:19.828121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.275 [2024-12-05 14:31:19.828133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.275 [2024-12-05 14:31:19.828143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.275 [2024-12-05 14:31:19.828170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.275 [2024-12-05 
14:31:19.828179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.275 [2024-12-05 14:31:19.828191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.275 [2024-12-05 14:31:19.828215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.275 [2024-12-05 14:31:19.828228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.275 [2024-12-05 14:31:19.828238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.275 [2024-12-05 14:31:19.828249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:10576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.275 [2024-12-05 14:31:19.828258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.275 [2024-12-05 14:31:19.828269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.275 [2024-12-05 14:31:19.828278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.275 [2024-12-05 14:31:19.828290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.275 [2024-12-05 14:31:19.828299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.275 [2024-12-05 14:31:19.828326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.275 [2024-12-05 14:31:19.828334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.275 [2024-12-05 14:31:19.828345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:10616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.275 [2024-12-05 14:31:19.828354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.275 [2024-12-05 14:31:19.828364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.275 [2024-12-05 14:31:19.828373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.275 [2024-12-05 14:31:19.828384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:10632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.275 [2024-12-05 14:31:19.828394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.275 [2024-12-05 14:31:19.828404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.275 [2024-12-05 14:31:19.828413] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.275 [2024-12-05 14:31:19.828423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.275 [2024-12-05 14:31:19.828432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.275 [2024-12-05 14:31:19.828443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.275 [2024-12-05 14:31:19.828453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.275 [2024-12-05 14:31:19.828463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:10072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.275 [2024-12-05 14:31:19.828472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.275 [2024-12-05 14:31:19.828483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.275 [2024-12-05 14:31:19.828492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.275 [2024-12-05 14:31:19.828502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.275 [2024-12-05 14:31:19.828511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.275 [2024-12-05 14:31:19.828522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.275 [2024-12-05 14:31:19.828531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.275 [2024-12-05 14:31:19.828541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.275 [2024-12-05 14:31:19.828550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.275 [2024-12-05 14:31:19.828561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.275 [2024-12-05 14:31:19.828570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.275 [2024-12-05 14:31:19.828581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.275 [2024-12-05 14:31:19.828590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.275 [2024-12-05 14:31:19.828600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.275 [2024-12-05 14:31:19.828609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.275 [2024-12-05 14:31:19.828620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.275 [2024-12-05 14:31:19.828629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.275 [2024-12-05 14:31:19.828640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.275 [2024-12-05 14:31:19.828649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.275 [2024-12-05 14:31:19.828659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:10680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.275 [2024-12-05 14:31:19.828668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.275 [2024-12-05 14:31:19.828679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:10688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.275 [2024-12-05 14:31:19.828688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.275 [2024-12-05 14:31:19.828698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:10696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.275 [2024-12-05 14:31:19.828707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.275 [2024-12-05 14:31:19.828719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.275 [2024-12-05 14:31:19.828727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.275 [2024-12-05 14:31:19.828738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.275 [2024-12-05 14:31:19.828753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.275 [2024-12-05 14:31:19.828764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.275 [2024-12-05 14:31:19.828773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.275 [2024-12-05 14:31:19.828783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:10728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.275 [2024-12-05 14:31:19.828792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.275 [2024-12-05 14:31:19.828803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:10736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.275 [2024-12-05 14:31:19.828811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:14.275 [2024-12-05 14:31:19.828822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.275 [2024-12-05 14:31:19.828831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.275 [2024-12-05 14:31:19.828841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.275 [2024-12-05 14:31:19.828850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.275 [2024-12-05 14:31:19.828877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.275 [2024-12-05 14:31:19.828889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.276 [2024-12-05 14:31:19.828901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.276 [2024-12-05 14:31:19.828910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.276 [2024-12-05 14:31:19.828920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.276 [2024-12-05 14:31:19.828930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.276 [2024-12-05 14:31:19.828940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.276 [2024-12-05 14:31:19.828949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.276 [2024-12-05 14:31:19.828959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.276 [2024-12-05 14:31:19.828968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.276 [2024-12-05 14:31:19.828979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.276 [2024-12-05 14:31:19.828987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.276 [2024-12-05 14:31:19.828998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.276 [2024-12-05 14:31:19.829007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.276 [2024-12-05 14:31:19.829019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.276 [2024-12-05 14:31:19.829028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.276 
[2024-12-05 14:31:19.829038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.276 [2024-12-05 14:31:19.829047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.276 [2024-12-05 14:31:19.829058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.276 [2024-12-05 14:31:19.829066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.276 [2024-12-05 14:31:19.829077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.276 [2024-12-05 14:31:19.829091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.276 [2024-12-05 14:31:19.829102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.276 [2024-12-05 14:31:19.829110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.276 [2024-12-05 14:31:19.829121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.276 [2024-12-05 14:31:19.829129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.276 [2024-12-05 14:31:19.829140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.276 [2024-12-05 14:31:19.829149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.276 [2024-12-05 14:31:19.829159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.276 [2024-12-05 14:31:19.829168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.276 [2024-12-05 14:31:19.829180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.276 [2024-12-05 14:31:19.829189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.276 [2024-12-05 14:31:19.829199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.276 [2024-12-05 14:31:19.829208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.276 [2024-12-05 14:31:19.829218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.276 [2024-12-05 14:31:19.829227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.276 [2024-12-05 14:31:19.829237] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.276 [2024-12-05 14:31:19.829247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.276 [2024-12-05 14:31:19.829257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.276 [2024-12-05 14:31:19.829266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.276 [2024-12-05 14:31:19.829276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.276 [2024-12-05 14:31:19.829285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.276 [2024-12-05 14:31:19.829295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.276 [2024-12-05 14:31:19.829304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.276 [2024-12-05 14:31:19.829314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.276 [2024-12-05 14:31:19.829324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.276 [2024-12-05 14:31:19.829335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:10312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.276 [2024-12-05 14:31:19.829343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.276 [2024-12-05 14:31:19.829354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.276 [2024-12-05 14:31:19.829363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.276 [2024-12-05 14:31:19.829373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.276 [2024-12-05 14:31:19.829382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.276 [2024-12-05 14:31:19.829393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.276 [2024-12-05 14:31:19.829406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.276 [2024-12-05 14:31:19.829417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.276 [2024-12-05 14:31:19.829426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.276 [2024-12-05 14:31:19.829436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:52 nsid:1 lba:10392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.276 [2024-12-05 14:31:19.829446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.276 [2024-12-05 14:31:19.829456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.276 [2024-12-05 14:31:19.829465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.276 [2024-12-05 14:31:19.829475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.276 [2024-12-05 14:31:19.829484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.276 [2024-12-05 14:31:19.829495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.276 [2024-12-05 14:31:19.829504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.276 [2024-12-05 14:31:19.829515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:10888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.276 [2024-12-05 14:31:19.829524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.276 [2024-12-05 14:31:19.829534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.276 [2024-12-05 14:31:19.829543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.276 [2024-12-05 14:31:19.829554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.276 [2024-12-05 14:31:19.829563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.276 [2024-12-05 14:31:19.829574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:10912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.276 [2024-12-05 14:31:19.829583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.276 [2024-12-05 14:31:19.829593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.276 [2024-12-05 14:31:19.829602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.276 [2024-12-05 14:31:19.829613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.276 [2024-12-05 14:31:19.829621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.276 [2024-12-05 14:31:19.829632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:10936 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.276 [2024-12-05 14:31:19.829641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.276 [2024-12-05 14:31:19.829651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:10944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.276 [2024-12-05 14:31:19.829660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.276 [2024-12-05 14:31:19.829670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.276 [2024-12-05 14:31:19.829680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.277 [2024-12-05 14:31:19.829695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.277 [2024-12-05 14:31:19.829704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.277 [2024-12-05 14:31:19.829715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.277 [2024-12-05 14:31:19.829729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.277 [2024-12-05 14:31:19.829740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.277 [2024-12-05 14:31:19.829749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.277 [2024-12-05 14:31:19.829760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.277 [2024-12-05 14:31:19.829769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.277 [2024-12-05 14:31:19.829779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.277 [2024-12-05 14:31:19.829788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.277 [2024-12-05 14:31:19.829798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.277 [2024-12-05 14:31:19.829819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.277 [2024-12-05 14:31:19.829831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.277 [2024-12-05 14:31:19.829840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.277 [2024-12-05 14:31:19.829850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.277 
[2024-12-05 14:31:19.829859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.277 [2024-12-05 14:31:19.829870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:11024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.277 [2024-12-05 14:31:19.829878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.277 [2024-12-05 14:31:19.829889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:11032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.277 [2024-12-05 14:31:19.829898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.277 [2024-12-05 14:31:19.829908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.277 [2024-12-05 14:31:19.829917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.277 [2024-12-05 14:31:19.829928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.277 [2024-12-05 14:31:19.829937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.277 [2024-12-05 14:31:19.829947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.277 [2024-12-05 14:31:19.829956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.277 [2024-12-05 14:31:19.829967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.277 [2024-12-05 14:31:19.829976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.277 [2024-12-05 14:31:19.829986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.277 [2024-12-05 14:31:19.829995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.277 [2024-12-05 14:31:19.830005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:11080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.277 [2024-12-05 14:31:19.830014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.277 [2024-12-05 14:31:19.830030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:11088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.277 [2024-12-05 14:31:19.830039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.277 [2024-12-05 14:31:19.830049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.277 [2024-12-05 14:31:19.830063] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.277 [2024-12-05 14:31:19.830073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.277 [2024-12-05 14:31:19.830082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.277 [2024-12-05 14:31:19.830093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.277 [2024-12-05 14:31:19.830101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.277 [2024-12-05 14:31:19.830112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.277 [2024-12-05 14:31:19.830121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.277 [2024-12-05 14:31:19.830131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:11128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.277 [2024-12-05 14:31:19.830140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.277 [2024-12-05 14:31:19.830150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.277 [2024-12-05 14:31:19.830159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.277 [2024-12-05 14:31:19.830169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.277 [2024-12-05 14:31:19.830178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.277 [2024-12-05 14:31:19.830188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.277 [2024-12-05 14:31:19.830197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.277 [2024-12-05 14:31:19.830207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.277 [2024-12-05 14:31:19.830216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.277 [2024-12-05 14:31:19.830226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.277 [2024-12-05 14:31:19.830235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.277 [2024-12-05 14:31:19.830246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.277 [2024-12-05 14:31:19.830255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.277 [2024-12-05 14:31:19.830265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:11184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.277 [2024-12-05 14:31:19.830274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.277 [2024-12-05 14:31:19.830284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.277 [2024-12-05 14:31:19.830293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.277 [2024-12-05 14:31:19.830304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:11200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.277 [2024-12-05 14:31:19.830313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.277 [2024-12-05 14:31:19.830323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.277 [2024-12-05 14:31:19.830332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.277 [2024-12-05 14:31:19.830347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.277 [2024-12-05 14:31:19.830355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.277 [2024-12-05 14:31:19.830366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.277 [2024-12-05 14:31:19.830379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.277 [2024-12-05 14:31:19.830390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.277 [2024-12-05 14:31:19.830398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.277 [2024-12-05 14:31:19.830409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.277 [2024-12-05 14:31:19.830417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.277 [2024-12-05 14:31:19.830428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:10552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.277 [2024-12-05 14:31:19.830437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.277 [2024-12-05 14:31:19.830447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.277 [2024-12-05 14:31:19.830456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.277 [2024-12-05 14:31:19.830466] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce660 is same with the state(5) to be set
00:25:14.277 [2024-12-05 14:31:19.830477] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:14.277 [2024-12-05 14:31:19.830485] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:14.277 [2024-12-05 14:31:19.830492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10600 len:8 PRP1 0x0 PRP2 0x0
00:25:14.277 [2024-12-05 14:31:19.830501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.277 [2024-12-05 14:31:19.830554] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21ce660 was disconnected and freed. reset controller.
00:25:14.277 [2024-12-05 14:31:19.830782] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:14.277 [2024-12-05 14:31:19.830867] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21498c0 (9): Bad file descriptor
00:25:14.277 [2024-12-05 14:31:19.830967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.277 [2024-12-05 14:31:19.831014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.278 [2024-12-05 14:31:19.831031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21498c0 with addr=10.0.0.2, port=4420
00:25:14.278 [2024-12-05 14:31:19.831040] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21498c0 is same with the state(5) to be set
00:25:14.278 [2024-12-05 14:31:19.831057] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21498c0 (9): Bad file descriptor
00:25:14.278 [2024-12-05 14:31:19.831072] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:14.278 [2024-12-05 14:31:19.831081] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:14.278 [2024-12-05 14:31:19.831091] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:14.278 [2024-12-05 14:31:19.831109] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.278 [2024-12-05 14:31:19.831120] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
14:31:19 -- host/timeout.sh@90 -- # sleep 1
00:25:15.213 [2024-12-05 14:31:20.831208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:15.213 [2024-12-05 14:31:20.831317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:15.213 [2024-12-05 14:31:20.831334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21498c0 with addr=10.0.0.2, port=4420
00:25:15.213 [2024-12-05 14:31:20.831345] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21498c0 is same with the state(5) to be set
00:25:15.213 [2024-12-05 14:31:20.831366] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21498c0 (9): Bad file descriptor
00:25:15.213 [2024-12-05 14:31:20.831383] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:15.213 [2024-12-05 14:31:20.831393] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:15.213 [2024-12-05 14:31:20.831402] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:15.213 [2024-12-05 14:31:20.831424] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:15.213 [2024-12-05 14:31:20.831435] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:15.213 14:31:20 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:15.471 [2024-12-05 14:31:21.032657] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:15.471 14:31:21 -- host/timeout.sh@92 -- # wait 100755
00:25:16.409 [2024-12-05 14:31:21.845493] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:25:24.527
00:25:24.527                                                 Latency(us)
00:25:24.527 [2024-12-05T14:31:30.175Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:24.527 [2024-12-05T14:31:30.175Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:24.527   Verification LBA range: start 0x0 length 0x4000
00:25:24.527   NVMe0n1                     :      10.01   10770.25      42.07       0.00     0.00   11868.48    1184.12 3019898.88
00:25:24.527 [2024-12-05T14:31:30.175Z] ===================================================================================================================
00:25:24.527 [2024-12-05T14:31:30.176Z] Total                       :             10770.25      42.07       0.00     0.00   11868.48    1184.12 3019898.88
00:25:24.528 0
00:25:24.528 14:31:28 -- host/timeout.sh@97 -- # rpc_pid=100872
00:25:24.528 14:31:28 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:24.528 14:31:28 -- host/timeout.sh@98 -- # sleep 1
00:25:24.528 Running I/O for 10 seconds...
00:25:24.528 14:31:29 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:24.528 [2024-12-05 14:31:29.984769] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985184] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985200] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985220] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985228] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985243] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985251] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985259] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985266] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985274] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985281] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985290] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985298] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985305] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985313] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985320] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985327] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985335] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985341] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985349] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985355] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985363] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985370] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985377] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985385] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985393] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985400] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985407] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985415] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985424] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985431] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985439] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985446] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985453] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985461] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985468] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985476] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985483] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985489] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985496] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985505] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985512] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985520] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985543] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985551] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985558] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985565] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985572] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985579] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985599] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985606] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985628] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985635] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985641] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985648] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985655] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985662] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985670] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985678] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985684] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985691] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985699] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985706] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985712] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 
00:25:24.528 [2024-12-05 14:31:29.985719] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985725] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985732] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985738] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985744] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985751] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985758] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985764] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985771] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985777] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985785] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985792] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.528 [2024-12-05 14:31:29.985798] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.529 [2024-12-05 14:31:29.985822] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.529 [2024-12-05 14:31:29.985829] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.529 [2024-12-05 14:31:29.985836] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.529 [2024-12-05 14:31:29.985851] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.529 [2024-12-05 14:31:29.985875] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.529 [2024-12-05 14:31:29.985884] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.529 [2024-12-05 14:31:29.985892] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.529 [2024-12-05 14:31:29.985899] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.529 [2024-12-05 14:31:29.985906] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is 
same with the state(5) to be set 00:25:24.529 [2024-12-05 14:31:29.985913] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.529 [2024-12-05 14:31:29.985920] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.529 [2024-12-05 14:31:29.985927] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.529 [2024-12-05 14:31:29.985934] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.529 [2024-12-05 14:31:29.985941] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.529 [2024-12-05 14:31:29.985948] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60110 is same with the state(5) to be set 00:25:24.529 [2024-12-05 14:31:29.986442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.529 [2024-12-05 14:31:29.986489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.529 [2024-12-05 14:31:29.986510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:13576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.529 [2024-12-05 14:31:29.986520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.529 [2024-12-05 14:31:29.986531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.529 [2024-12-05 14:31:29.986540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.529 [2024-12-05 14:31:29.986550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.529 [2024-12-05 14:31:29.986559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.529 [2024-12-05 14:31:29.986569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.529 [2024-12-05 14:31:29.986579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.529 [2024-12-05 14:31:29.986589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:13072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.529 [2024-12-05 14:31:29.986598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.529 [2024-12-05 14:31:29.986608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.529 [2024-12-05 14:31:29.986616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.529 [2024-12-05 14:31:29.986627] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:13096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.529 [2024-12-05 14:31:29.986635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.529 [2024-12-05 14:31:29.986645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.529 [2024-12-05 14:31:29.986653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.529 [2024-12-05 14:31:29.986664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.529 [2024-12-05 14:31:29.986672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.529 [2024-12-05 14:31:29.986683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.529 [2024-12-05 14:31:29.986692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.529 [2024-12-05 14:31:29.986702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:13608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.529 [2024-12-05 14:31:29.986710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.529 [2024-12-05 14:31:29.986722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.529 [2024-12-05 14:31:29.986730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.529 [2024-12-05 14:31:29.986741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.529 [2024-12-05 14:31:29.986749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.529 [2024-12-05 14:31:29.986760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.529 [2024-12-05 14:31:29.986768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.529 [2024-12-05 14:31:29.986779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.529 [2024-12-05 14:31:29.986787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.529 [2024-12-05 14:31:29.986797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.529 [2024-12-05 14:31:29.986808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.529 [2024-12-05 14:31:29.986851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:39 nsid:1 lba:13688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.529 [2024-12-05 14:31:29.986863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.529 [2024-12-05 14:31:29.986875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.529 [2024-12-05 14:31:29.986911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.529 [2024-12-05 14:31:29.986924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.529 [2024-12-05 14:31:29.986934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.529 [2024-12-05 14:31:29.986946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.529 [2024-12-05 14:31:29.986955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.529 [2024-12-05 14:31:29.986967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.529 [2024-12-05 14:31:29.986976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.529 [2024-12-05 14:31:29.986987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.529 [2024-12-05 14:31:29.986997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.529 [2024-12-05 14:31:29.987008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:13736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.529 [2024-12-05 14:31:29.987018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.529 [2024-12-05 14:31:29.987029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.529 [2024-12-05 14:31:29.987039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.529 [2024-12-05 14:31:29.987050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.529 [2024-12-05 14:31:29.987060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.529 [2024-12-05 14:31:29.987071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.529 [2024-12-05 14:31:29.987080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.529 [2024-12-05 14:31:29.987092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:13160 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.529 [2024-12-05 14:31:29.987101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.529 [2024-12-05 14:31:29.987113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.529 [2024-12-05 14:31:29.987122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.529 [2024-12-05 14:31:29.987133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.529 [2024-12-05 14:31:29.987144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.529 [2024-12-05 14:31:29.987156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.529 [2024-12-05 14:31:29.987165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.529 [2024-12-05 14:31:29.987177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.529 [2024-12-05 14:31:29.987186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.529 [2024-12-05 14:31:29.987198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-05 14:31:29.987224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.530 [2024-12-05 14:31:29.987235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:13256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-05 14:31:29.987260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.530 [2024-12-05 14:31:29.987271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.530 [2024-12-05 14:31:29.987280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.530 [2024-12-05 14:31:29.987291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-05 14:31:29.987300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.530 [2024-12-05 14:31:29.987311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.530 [2024-12-05 14:31:29.987320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.530 [2024-12-05 14:31:29.987331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:24.530 [2024-12-05 14:31:29.987340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.530 [2024-12-05 14:31:29.987351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-05 14:31:29.987360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.530 [2024-12-05 14:31:29.987370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-05 14:31:29.987379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.530 [2024-12-05 14:31:29.987390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-05 14:31:29.987399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.530 [2024-12-05 14:31:29.987409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-05 14:31:29.987418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.530 [2024-12-05 14:31:29.987430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-05 14:31:29.987439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.530 [2024-12-05 14:31:29.987450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-05 14:31:29.987470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.530 [2024-12-05 14:31:29.987481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-05 14:31:29.987490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.530 [2024-12-05 14:31:29.987501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-05 14:31:29.987510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.530 [2024-12-05 14:31:29.987520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-05 14:31:29.987529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.530 [2024-12-05 14:31:29.987539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-05 14:31:29.987548] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.530 [2024-12-05 14:31:29.987558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-05 14:31:29.987567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.530 [2024-12-05 14:31:29.987578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-05 14:31:29.987586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.530 [2024-12-05 14:31:29.987597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-05 14:31:29.987606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.530 [2024-12-05 14:31:29.987617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-05 14:31:29.987626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.530 [2024-12-05 14:31:29.987637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-05 14:31:29.987645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.530 [2024-12-05 14:31:29.987656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-05 14:31:29.987665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.530 [2024-12-05 14:31:29.987675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.530 [2024-12-05 14:31:29.987684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.530 [2024-12-05 14:31:29.987694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-05 14:31:29.987703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.530 [2024-12-05 14:31:29.987713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-05 14:31:29.987722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.530 [2024-12-05 14:31:29.987732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:13816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-05 14:31:29.987741] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.530 [2024-12-05 14:31:29.987752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-05 14:31:29.987761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.530 [2024-12-05 14:31:29.987772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.530 [2024-12-05 14:31:29.987781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.530 [2024-12-05 14:31:29.987792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-05 14:31:29.987801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.530 [2024-12-05 14:31:29.987828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.530 [2024-12-05 14:31:29.987854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.530 [2024-12-05 14:31:29.987866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:13856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.530 [2024-12-05 14:31:29.987888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.530 [2024-12-05 14:31:29.987901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.530 [2024-12-05 14:31:29.987911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.530 [2024-12-05 14:31:29.987939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-05 14:31:29.987949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.530 [2024-12-05 14:31:29.987961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-05 14:31:29.987970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.530 [2024-12-05 14:31:29.987992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:13888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.530 [2024-12-05 14:31:29.988002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.530 [2024-12-05 14:31:29.988014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-05 14:31:29.988024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.530 [2024-12-05 14:31:29.988036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-05 14:31:29.988046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.530 [2024-12-05 14:31:29.988057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-05 14:31:29.988067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.530 [2024-12-05 14:31:29.988079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.530 [2024-12-05 14:31:29.988096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.530 [2024-12-05 14:31:29.988108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.530 [2024-12-05 14:31:29.988118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.530 [2024-12-05 14:31:29.988129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.531 [2024-12-05 14:31:29.988139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.531 [2024-12-05 14:31:29.988152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.531 [2024-12-05 14:31:29.988162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.531 [2024-12-05 14:31:29.988174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.531 [2024-12-05 14:31:29.988185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.531 [2024-12-05 14:31:29.988197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.531 [2024-12-05 14:31:29.988207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.531 [2024-12-05 14:31:29.988219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.531 [2024-12-05 14:31:29.988229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.531 [2024-12-05 14:31:29.988240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.531 [2024-12-05 14:31:29.988250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:24.531 [2024-12-05 14:31:29.988276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.531 [2024-12-05 14:31:29.988302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.531 [2024-12-05 14:31:29.988313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.531 [2024-12-05 14:31:29.988322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.531 [2024-12-05 14:31:29.988332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:14000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.531 [2024-12-05 14:31:29.988342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.531 [2024-12-05 14:31:29.988353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.531 [2024-12-05 14:31:29.988362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.531 [2024-12-05 14:31:29.988372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.531 [2024-12-05 14:31:29.988390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.531 [2024-12-05 14:31:29.988401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.531 [2024-12-05 14:31:29.988410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.531 [2024-12-05 14:31:29.988420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.531 [2024-12-05 14:31:29.988429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.531 [2024-12-05 14:31:29.988440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.531 [2024-12-05 14:31:29.988464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.531 [2024-12-05 14:31:29.988474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:14048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.531 [2024-12-05 14:31:29.988487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.531 [2024-12-05 14:31:29.988498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.531 [2024-12-05 14:31:29.988507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.531 [2024-12-05 14:31:29.988517] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.531 [2024-12-05 14:31:29.988526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.531 [2024-12-05 14:31:29.988536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.531 [2024-12-05 14:31:29.988545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.531 [2024-12-05 14:31:29.988556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.531 [2024-12-05 14:31:29.988569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.531 [2024-12-05 14:31:29.988580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.531 [2024-12-05 14:31:29.988588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.531 [2024-12-05 14:31:29.988599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.531 [2024-12-05 14:31:29.988608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.531 [2024-12-05 14:31:29.988618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:14104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.531 [2024-12-05 14:31:29.988626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.531 [2024-12-05 14:31:29.988637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:14112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.531 [2024-12-05 14:31:29.988646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.531 [2024-12-05 14:31:29.988656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:14120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.531 [2024-12-05 14:31:29.988664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.531 [2024-12-05 14:31:29.988674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.531 [2024-12-05 14:31:29.988683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.531 [2024-12-05 14:31:29.988693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.531 [2024-12-05 14:31:29.988702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.531 [2024-12-05 14:31:29.988712] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.531 [2024-12-05 14:31:29.988721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.531 [2024-12-05 14:31:29.988732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.531 [2024-12-05 14:31:29.988740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.531 [2024-12-05 14:31:29.988751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.531 [2024-12-05 14:31:29.988759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.531 [2024-12-05 14:31:29.988770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.531 [2024-12-05 14:31:29.988778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.531 [2024-12-05 14:31:29.988789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.531 [2024-12-05 14:31:29.988798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.531 [2024-12-05 14:31:29.988809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.531 [2024-12-05 14:31:29.988834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.531 [2024-12-05 14:31:29.988846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.531 [2024-12-05 14:31:29.988856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.531 [2024-12-05 14:31:29.988878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:14200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.531 [2024-12-05 14:31:29.988889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.531 [2024-12-05 14:31:29.988901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.531 [2024-12-05 14:31:29.988916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.531 [2024-12-05 14:31:29.988927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.531 [2024-12-05 14:31:29.988937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.531 [2024-12-05 14:31:29.988948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:34 nsid:1 lba:14224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.531 [2024-12-05 14:31:29.988958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.531 [2024-12-05 14:31:29.988969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:14232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.531 [2024-12-05 14:31:29.988979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.531 [2024-12-05 14:31:29.988990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.531 [2024-12-05 14:31:29.989000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.531 [2024-12-05 14:31:29.989011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:14248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.531 [2024-12-05 14:31:29.989020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.531 [2024-12-05 14:31:29.989031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:14256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.532 [2024-12-05 14:31:29.989041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.532 [2024-12-05 14:31:29.989053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.532 [2024-12-05 14:31:29.989062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.532 [2024-12-05 14:31:29.989073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.532 [2024-12-05 14:31:29.989083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.532 [2024-12-05 14:31:29.989095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:14280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.532 [2024-12-05 14:31:29.989104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.532 [2024-12-05 14:31:29.989131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.532 [2024-12-05 14:31:29.989141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.532 [2024-12-05 14:31:29.989153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.532 [2024-12-05 14:31:29.989163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.532 [2024-12-05 14:31:29.989190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14304 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:24.532 [2024-12-05 14:31:29.989215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.532 [2024-12-05 14:31:29.989226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.532 [2024-12-05 14:31:29.989235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.532 [2024-12-05 14:31:29.989246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.532 [2024-12-05 14:31:29.989255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.532 [2024-12-05 14:31:29.989265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.532 [2024-12-05 14:31:29.989274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.532 [2024-12-05 14:31:29.989285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.532 [2024-12-05 14:31:29.989301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.532 [2024-12-05 14:31:29.989312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.532 [2024-12-05 14:31:29.989323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.532 [2024-12-05 14:31:29.989333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.532 [2024-12-05 14:31:29.989342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.532 [2024-12-05 14:31:29.989353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.532 [2024-12-05 14:31:29.989362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.532 [2024-12-05 14:31:29.989373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.532 [2024-12-05 14:31:29.989382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.532 [2024-12-05 14:31:29.989392] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219a1d0 is same with the state(5) to be set 00:25:24.532 [2024-12-05 14:31:29.989403] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:24.532 [2024-12-05 14:31:29.989411] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:24.532 [2024-12-05 14:31:29.989419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:13680 len:8 PRP1 0x0 PRP2 0x0 00:25:24.532 [2024-12-05 14:31:29.989428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.532 [2024-12-05 14:31:29.989482] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x219a1d0 was disconnected and freed. reset controller. 00:25:24.532 [2024-12-05 14:31:29.989569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.532 [2024-12-05 14:31:29.989584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.532 [2024-12-05 14:31:29.989595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.532 [2024-12-05 14:31:29.989603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.532 [2024-12-05 14:31:29.989612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.532 [2024-12-05 14:31:29.989620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.532 [2024-12-05 14:31:29.989630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.532 [2024-12-05 14:31:29.989638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.532 [2024-12-05 14:31:29.989647] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21498c0 is same with the state(5) to be set 00:25:24.532 [2024-12-05 14:31:29.989934] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:24.532 [2024-12-05 14:31:29.989960] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21498c0 (9): Bad file descriptor 00:25:24.532 [2024-12-05 14:31:29.990058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:24.532 [2024-12-05 14:31:29.990109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:24.532 [2024-12-05 14:31:29.990126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21498c0 with addr=10.0.0.2, port=4420 00:25:24.532 [2024-12-05 14:31:29.990137] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21498c0 is same with the state(5) to be set 00:25:24.532 [2024-12-05 14:31:29.990157] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21498c0 (9): Bad file descriptor 00:25:24.532 [2024-12-05 14:31:29.990187] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:24.532 [2024-12-05 14:31:29.990218] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:24.532 [2024-12-05 14:31:29.990258] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:24.532 [2024-12-05 14:31:30.000539] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
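The dump above is the host draining I/O qpair 1 after the target connection drops: every queued READ/WRITE is completed manually with status ABORTED - SQ DELETION, printed as (00/08). The two hex fields are the NVMe Status Code Type and Status Code (SCT 0x0 = Generic Command Status, SC 0x08 = Command Aborted due to SQ Deletion), and dnr:0 means the Do Not Retry bit is clear, so the commands may be resubmitted once the controller comes back. A small lookup sketch, illustration only and not part of the test scripts (the helper name is made up):

    # decode_status is a hypothetical helper for the "(SCT/SC)" pair that
    # spdk_nvme_print_completion prints, e.g. "(00/08)" in the dump above.
    STATUS_CODE_TYPES = {0x0: "Generic Command Status"}
    GENERIC_STATUS_CODES = {0x00: "Successful Completion",
                            0x08: "Command Aborted due to SQ Deletion"}

    def decode_status(sct: int, sc: int) -> str:
        sct_name = STATUS_CODE_TYPES.get(sct, f"SCT {sct:#x}")
        sc_name = GENERIC_STATUS_CODES.get(sc, f"SC {sc:#x}") if sct == 0 else f"SC {sc:#x}"
        return f"{sct_name}: {sc_name}"

    print(decode_status(0x0, 0x08))  # -> Generic Command Status: Command Aborted due to SQ Deletion
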
00:25:24.532 [2024-12-05 14:31:30.000574] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:24.532 14:31:30 -- host/timeout.sh@101 -- # sleep 3 00:25:25.469 [2024-12-05 14:31:31.000666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:25.469 [2024-12-05 14:31:31.000754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:25.469 [2024-12-05 14:31:31.000773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21498c0 with addr=10.0.0.2, port=4420 00:25:25.469 [2024-12-05 14:31:31.000784] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21498c0 is same with the state(5) to be set 00:25:25.469 [2024-12-05 14:31:31.000805] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21498c0 (9): Bad file descriptor 00:25:25.469 [2024-12-05 14:31:31.000859] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:25.469 [2024-12-05 14:31:31.000871] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:25.469 [2024-12-05 14:31:31.000881] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:25.469 [2024-12-05 14:31:31.000914] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:25.469 [2024-12-05 14:31:31.000926] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:26.406 [2024-12-05 14:31:32.001019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:26.406 [2024-12-05 14:31:32.001094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:26.406 [2024-12-05 14:31:32.001111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21498c0 with addr=10.0.0.2, port=4420 00:25:26.406 [2024-12-05 14:31:32.001122] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21498c0 is same with the state(5) to be set 00:25:26.406 [2024-12-05 14:31:32.001143] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21498c0 (9): Bad file descriptor 00:25:26.406 [2024-12-05 14:31:32.001170] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:26.406 [2024-12-05 14:31:32.001181] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:26.406 [2024-12-05 14:31:32.001191] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:26.406 [2024-12-05 14:31:32.001212] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:26.406 [2024-12-05 14:31:32.001222] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:27.785 [2024-12-05 14:31:33.001590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.785 [2024-12-05 14:31:33.001884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.785 [2024-12-05 14:31:33.001946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21498c0 with addr=10.0.0.2, port=4420 00:25:27.785 [2024-12-05 14:31:33.002300] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21498c0 is same with the state(5) to be set 00:25:27.785 [2024-12-05 14:31:33.002497] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21498c0 (9): Bad file descriptor 00:25:27.785 [2024-12-05 14:31:33.002684] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:27.785 [2024-12-05 14:31:33.002738] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:27.785 [2024-12-05 14:31:33.002752] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:27.785 [2024-12-05 14:31:33.004996] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:27.785 [2024-12-05 14:31:33.005155] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:27.785 14:31:33 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:27.785 [2024-12-05 14:31:33.261614] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:27.785 14:31:33 -- host/timeout.sh@103 -- # wait 100872 00:25:28.723 [2024-12-05 14:31:34.028684] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
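What the trace above shows: with the target's listener removed, every controller reset attempt fails at the TCP connect with errno 111 (ECONNREFUSED), roughly once per second, until host/timeout.sh re-adds the listener with nvmf_subsystem_add_listener at 14:31:33; the next reset then completes ("Resetting controller successful"). A minimal sketch of that fixed-delay reconnect pattern, assuming the address and port from the trace and made-up delay/deadline values (an illustration, not SPDK's reconnect code):

    import socket
    import time

    def wait_for_listener(addr: str = "10.0.0.2", port: int = 4420,
                          delay_s: float = 1.0, deadline_s: float = 10.0) -> bool:
        # Retry the TCP connect at a fixed cadence, the way the host retries the
        # fabric connection above, until the listener is back or the deadline expires.
        start = time.monotonic()
        while time.monotonic() - start < deadline_s:
            try:
                with socket.create_connection((addr, port), timeout=1.0):
                    return True                # listener is back; the reset can proceed
            except OSError:                    # "connect() failed, errno = 111" above
                time.sleep(delay_s)
        return False                           # give up; the controller stays in failed state

Whether the controller recovers or stays permanently failed is governed by the same kind of deadline; in the second bdevperf run below it is set explicitly via --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2.
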
00:25:34.006 00:25:34.006 Latency(us) 00:25:34.006 [2024-12-05T14:31:39.654Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:34.006 [2024-12-05T14:31:39.654Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:34.006 Verification LBA range: start 0x0 length 0x4000 00:25:34.006 NVMe0n1 : 10.00 9653.00 37.71 7144.47 0.00 7608.99 551.10 3019898.88 00:25:34.006 [2024-12-05T14:31:39.654Z] =================================================================================================================== 00:25:34.006 [2024-12-05T14:31:39.654Z] Total : 9653.00 37.71 7144.47 0.00 7608.99 0.00 3019898.88 00:25:34.006 0 00:25:34.006 14:31:38 -- host/timeout.sh@105 -- # killprocess 100706 00:25:34.006 14:31:38 -- common/autotest_common.sh@936 -- # '[' -z 100706 ']' 00:25:34.006 14:31:38 -- common/autotest_common.sh@940 -- # kill -0 100706 00:25:34.006 14:31:38 -- common/autotest_common.sh@941 -- # uname 00:25:34.006 14:31:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:34.006 14:31:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100706 00:25:34.006 killing process with pid 100706 00:25:34.006 Received shutdown signal, test time was about 10.000000 seconds 00:25:34.006 00:25:34.006 Latency(us) 00:25:34.006 [2024-12-05T14:31:39.654Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:34.006 [2024-12-05T14:31:39.654Z] =================================================================================================================== 00:25:34.006 [2024-12-05T14:31:39.654Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:34.006 14:31:38 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:25:34.006 14:31:38 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:25:34.006 14:31:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100706' 00:25:34.006 14:31:38 -- common/autotest_common.sh@955 -- # kill 100706 00:25:34.006 14:31:38 -- common/autotest_common.sh@960 -- # wait 100706 00:25:34.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:34.006 14:31:39 -- host/timeout.sh@110 -- # bdevperf_pid=100998 00:25:34.006 14:31:39 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:25:34.006 14:31:39 -- host/timeout.sh@112 -- # waitforlisten 100998 /var/tmp/bdevperf.sock 00:25:34.006 14:31:39 -- common/autotest_common.sh@829 -- # '[' -z 100998 ']' 00:25:34.007 14:31:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:34.007 14:31:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:34.007 14:31:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:34.007 14:31:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:34.007 14:31:39 -- common/autotest_common.sh@10 -- # set +x 00:25:34.007 [2024-12-05 14:31:39.165312] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
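For the bdevperf summary above, the IOPS and throughput columns are consistent for the 4096-byte reads this job issues: 9653 IOPS x 4096 B is about 37.71 MiB/s. A one-line check (illustration only):

    iops, io_size = 9653.00, 4096                   # values from the bdevperf table above
    print(f"{iops * io_size / 2**20:.2f} MiB/s")    # -> 37.71 MiB/s, matching the MiB/s column

The large Fail/s figure (7144.47) presumably counts the I/Os that completed with the ABORTED - SQ DELETION status during the forced reconnect window, not media errors.
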
00:25:34.007 [2024-12-05 14:31:39.165587] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100998 ] 00:25:34.007 [2024-12-05 14:31:39.304474] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.007 [2024-12-05 14:31:39.364533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:34.574 14:31:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:34.574 14:31:40 -- common/autotest_common.sh@862 -- # return 0 00:25:34.574 14:31:40 -- host/timeout.sh@116 -- # dtrace_pid=101026 00:25:34.574 14:31:40 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 100998 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:25:34.574 14:31:40 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:25:34.833 14:31:40 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:25:35.092 NVMe0n1 00:25:35.092 14:31:40 -- host/timeout.sh@124 -- # rpc_pid=101074 00:25:35.092 14:31:40 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:35.092 14:31:40 -- host/timeout.sh@125 -- # sleep 1 00:25:35.092 Running I/O for 10 seconds... 00:25:36.028 14:31:41 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:36.290 [2024-12-05 14:31:41.900558] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc63ba0 is same with the state(5) to be set 00:25:36.290 [2024-12-05 14:31:41.900606] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc63ba0 is same with the state(5) to be set 00:25:36.290 [2024-12-05 14:31:41.900628] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc63ba0 is same with the state(5) to be set 00:25:36.290 [2024-12-05 14:31:41.900635] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc63ba0 is same with the state(5) to be set 00:25:36.290 [2024-12-05 14:31:41.900643] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc63ba0 is same with the state(5) to be set 00:25:36.290 [2024-12-05 14:31:41.900650] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc63ba0 is same with the state(5) to be set 00:25:36.290 [2024-12-05 14:31:41.900658] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc63ba0 is same with the state(5) to be set 00:25:36.290 [2024-12-05 14:31:41.900665] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc63ba0 is same with the state(5) to be set 00:25:36.290 [2024-12-05 14:31:41.900673] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc63ba0 is same with the state(5) to be set 00:25:36.290 [2024-12-05 14:31:41.900681] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc63ba0 is same with the state(5) to be set 00:25:36.290 [2024-12-05 14:31:41.900688] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0xc63ba0 is same with the state(5) to be set
[... the same tcp.c:1576:nvmf_tcp_qpair_set_recv_state error repeats for tqpair=0xc63ba0 at microsecond intervals (14:31:41.900606 through 14:31:41.901360); the duplicate records are elided here ...]
00:25:36.291 [2024-12-05 14:31:41.901695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:109376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:36.291 [2024-12-05 14:31:41.901734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching nvme_io_qpair_print_command / ABORTED - SQ DELETION record pairs follow for every other outstanding READ on qid:1 (14:31:41.901757 through 14:31:41.904341); they are elided here ...]
00:25:36.294 [2024-12-05 14:31:41.904351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.294 [2024-12-05 14:31:41.904359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:36.294 [2024-12-05 14:31:41.904368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:18072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.294 [2024-12-05 14:31:41.904377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:36.294 [2024-12-05 14:31:41.904386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:33344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.294 [2024-12-05 14:31:41.904394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:36.294 [2024-12-05 14:31:41.904404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:93120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.294 [2024-12-05 14:31:41.904412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:36.294 [2024-12-05 14:31:41.904422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:125856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.294 [2024-12-05 14:31:41.904430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:36.294 [2024-12-05 14:31:41.904440] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f3780 is same with the state(5) to be set 00:25:36.294 [2024-12-05 14:31:41.904451] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:36.294 [2024-12-05 14:31:41.904458] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:36.294 [2024-12-05 14:31:41.904465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125904 len:8 PRP1 0x0 PRP2 0x0 00:25:36.294 [2024-12-05 14:31:41.904473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:36.294 [2024-12-05 14:31:41.904537] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14f3780 was disconnected and freed. reset controller. 
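The abort storm above is the expected result of the host/timeout.sh step traced at 14:31:40 and 14:31:41: bdevperf attaches NVMe0 with a 2-second reconnect delay and a 5-second controller-loss timeout, starts the randread workload, and then the test deletes the target listener out from under it. Stripped of the harness, that step reduces to roughly the following sketch (it assumes bdevperf is already running with its RPC socket at /var/tmp/bdevperf.sock and that the target still exposes nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420; paths are the ones used in this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # bdev_nvme options used by the test (as traced at host/timeout.sh@118).
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9

    # Attach the controller with a 2 s reconnect delay and a 5 s ctrlr-loss timeout
    # (host/timeout.sh@120); once the connection drops, reconnect attempts are
    # expected roughly every 2 s until the loss timeout expires.
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

    # Start I/O, then remove the listener so every queued READ is completed with
    # ABORTED - SQ DELETION and the initiator is left with nothing to reconnect to.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests &
    sleep 1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420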
00:25:36.294 [2024-12-05 14:31:41.904792] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.294 [2024-12-05 14:31:41.904923] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146e8c0 (9): Bad file descriptor 00:25:36.294 [2024-12-05 14:31:41.905034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.294 [2024-12-05 14:31:41.905095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.294 [2024-12-05 14:31:41.905112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146e8c0 with addr=10.0.0.2, port=4420 00:25:36.294 [2024-12-05 14:31:41.905122] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e8c0 is same with the state(5) to be set 00:25:36.294 [2024-12-05 14:31:41.905140] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146e8c0 (9): Bad file descriptor 00:25:36.294 [2024-12-05 14:31:41.905156] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.294 [2024-12-05 14:31:41.905172] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.294 [2024-12-05 14:31:41.905197] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.294 [2024-12-05 14:31:41.905220] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.294 [2024-12-05 14:31:41.905231] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.294 14:31:41 -- host/timeout.sh@128 -- # wait 101074 00:25:38.826 [2024-12-05 14:31:43.905348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.826 [2024-12-05 14:31:43.905431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.826 [2024-12-05 14:31:43.905449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146e8c0 with addr=10.0.0.2, port=4420 00:25:38.826 [2024-12-05 14:31:43.905459] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e8c0 is same with the state(5) to be set 00:25:38.826 [2024-12-05 14:31:43.905478] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146e8c0 (9): Bad file descriptor 00:25:38.826 [2024-12-05 14:31:43.905494] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:38.826 [2024-12-05 14:31:43.905502] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:38.827 [2024-12-05 14:31:43.905510] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:38.827 [2024-12-05 14:31:43.905530] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
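With the listener gone, each reconnect attempt fails with errno 111 (connection refused) and bdev_nvme schedules the next attempt after the configured 2-second reconnect delay; the attempts at 14:31:43, 14:31:45 and 14:31:47 below follow that cadence until the 5-second ctrlr-loss timeout leaves the controller in a failed state (the "already in failed state" notice at 14:31:47). The bpftrace probes attached earlier (nvmf_timeout.bt) record each delayed reconnect in trace.txt, and the harness counts them afterwards (see the grep traced at 14:31:48 below). A standalone version of that check, assuming trace.txt was written to the path used in this run, could look like:

    trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt

    # Count the "reconnect delay" events recorded for NVMe0; with a 2 s reconnect
    # delay and a 5 s ctrlr-loss timeout the test expects more than two of them
    # (this run records three, spaced roughly 2 s apart).
    delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace")
    if (( delays <= 2 )); then
        echo "only $delays reconnect delay events recorded" >&2
        exit 1
    fi
    echo "reconnect delay events: $delays"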
00:25:38.827 [2024-12-05 14:31:43.905541] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:40.728 [2024-12-05 14:31:45.905641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.728 [2024-12-05 14:31:45.905724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.728 [2024-12-05 14:31:45.905742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146e8c0 with addr=10.0.0.2, port=4420 00:25:40.728 [2024-12-05 14:31:45.905752] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e8c0 is same with the state(5) to be set 00:25:40.728 [2024-12-05 14:31:45.905770] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146e8c0 (9): Bad file descriptor 00:25:40.728 [2024-12-05 14:31:45.905795] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:40.728 [2024-12-05 14:31:45.905817] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:40.728 [2024-12-05 14:31:45.905827] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:40.728 [2024-12-05 14:31:45.905845] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.728 [2024-12-05 14:31:45.905856] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.632 [2024-12-05 14:31:47.905903] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.632 [2024-12-05 14:31:47.905948] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.632 [2024-12-05 14:31:47.905958] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.632 [2024-12-05 14:31:47.905967] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:25:42.632 [2024-12-05 14:31:47.905985] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.567 00:25:43.567 Latency(us) 00:25:43.567 [2024-12-05T14:31:49.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:43.567 [2024-12-05T14:31:49.215Z] Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:25:43.567 NVMe0n1 : 8.17 3368.46 13.16 15.66 0.00 37782.35 1906.50 7015926.69 00:25:43.567 [2024-12-05T14:31:49.215Z] =================================================================================================================== 00:25:43.567 [2024-12-05T14:31:49.215Z] Total : 3368.46 13.16 15.66 0.00 37782.35 1906.50 7015926.69 00:25:43.567 0 00:25:43.567 14:31:48 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:43.567 Attaching 5 probes... 
00:25:43.567 1290.527166: reset bdev controller NVMe0 00:25:43.567 1290.710967: reconnect bdev controller NVMe0 00:25:43.567 3290.997224: reconnect delay bdev controller NVMe0 00:25:43.567 3291.029565: reconnect bdev controller NVMe0 00:25:43.567 5291.305185: reconnect delay bdev controller NVMe0 00:25:43.567 5291.321215: reconnect bdev controller NVMe0 00:25:43.567 7291.617714: reconnect delay bdev controller NVMe0 00:25:43.567 7291.631312: reconnect bdev controller NVMe0 00:25:43.567 14:31:48 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:25:43.567 14:31:48 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:25:43.567 14:31:48 -- host/timeout.sh@136 -- # kill 101026 00:25:43.567 14:31:48 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:43.567 14:31:48 -- host/timeout.sh@139 -- # killprocess 100998 00:25:43.567 14:31:48 -- common/autotest_common.sh@936 -- # '[' -z 100998 ']' 00:25:43.567 14:31:48 -- common/autotest_common.sh@940 -- # kill -0 100998 00:25:43.567 14:31:48 -- common/autotest_common.sh@941 -- # uname 00:25:43.567 14:31:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:43.567 14:31:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100998 00:25:43.567 killing process with pid 100998 00:25:43.567 Received shutdown signal, test time was about 8.240599 seconds 00:25:43.567 00:25:43.567 Latency(us) 00:25:43.567 [2024-12-05T14:31:49.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:43.567 [2024-12-05T14:31:49.215Z] =================================================================================================================== 00:25:43.567 [2024-12-05T14:31:49.216Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:43.568 14:31:48 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:25:43.568 14:31:48 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:25:43.568 14:31:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100998' 00:25:43.568 14:31:48 -- common/autotest_common.sh@955 -- # kill 100998 00:25:43.568 14:31:48 -- common/autotest_common.sh@960 -- # wait 100998 00:25:43.568 14:31:49 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:44.135 14:31:49 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:25:44.135 14:31:49 -- host/timeout.sh@145 -- # nvmftestfini 00:25:44.135 14:31:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:44.135 14:31:49 -- nvmf/common.sh@116 -- # sync 00:25:44.135 14:31:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:44.135 14:31:49 -- nvmf/common.sh@119 -- # set +e 00:25:44.135 14:31:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:44.135 14:31:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:44.135 rmmod nvme_tcp 00:25:44.135 rmmod nvme_fabrics 00:25:44.135 rmmod nvme_keyring 00:25:44.135 14:31:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:44.135 14:31:49 -- nvmf/common.sh@123 -- # set -e 00:25:44.135 14:31:49 -- nvmf/common.sh@124 -- # return 0 00:25:44.135 14:31:49 -- nvmf/common.sh@477 -- # '[' -n 100415 ']' 00:25:44.135 14:31:49 -- nvmf/common.sh@478 -- # killprocess 100415 00:25:44.135 14:31:49 -- common/autotest_common.sh@936 -- # '[' -z 100415 ']' 00:25:44.135 14:31:49 -- common/autotest_common.sh@940 -- # kill -0 100415 00:25:44.135 14:31:49 -- common/autotest_common.sh@941 -- # uname 00:25:44.135 14:31:49 -- common/autotest_common.sh@941 -- # '[' 
Linux = Linux ']' 00:25:44.135 14:31:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100415 00:25:44.135 killing process with pid 100415 00:25:44.135 14:31:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:44.135 14:31:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:44.135 14:31:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100415' 00:25:44.135 14:31:49 -- common/autotest_common.sh@955 -- # kill 100415 00:25:44.135 14:31:49 -- common/autotest_common.sh@960 -- # wait 100415 00:25:44.393 14:31:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:44.393 14:31:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:44.393 14:31:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:44.393 14:31:49 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:44.393 14:31:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:44.393 14:31:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.393 14:31:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:44.393 14:31:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.393 14:31:49 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:44.393 00:25:44.393 real 0m46.570s 00:25:44.393 user 2m15.445s 00:25:44.393 sys 0m5.436s 00:25:44.393 14:31:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:44.393 ************************************ 00:25:44.393 END TEST nvmf_timeout 00:25:44.393 ************************************ 00:25:44.393 14:31:49 -- common/autotest_common.sh@10 -- # set +x 00:25:44.393 14:31:50 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:25:44.393 14:31:50 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:25:44.393 14:31:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:44.393 14:31:50 -- common/autotest_common.sh@10 -- # set +x 00:25:44.651 14:31:50 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:25:44.651 00:25:44.651 real 17m30.032s 00:25:44.651 user 55m39.751s 00:25:44.651 sys 3m43.602s 00:25:44.651 14:31:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:44.651 14:31:50 -- common/autotest_common.sh@10 -- # set +x 00:25:44.651 ************************************ 00:25:44.651 END TEST nvmf_tcp 00:25:44.651 ************************************ 00:25:44.651 14:31:50 -- spdk/autotest.sh@283 -- # [[ 0 -eq 0 ]] 00:25:44.651 14:31:50 -- spdk/autotest.sh@284 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:44.651 14:31:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:44.651 14:31:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:44.651 14:31:50 -- common/autotest_common.sh@10 -- # set +x 00:25:44.651 ************************************ 00:25:44.651 START TEST spdkcli_nvmf_tcp 00:25:44.651 ************************************ 00:25:44.651 14:31:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:44.651 * Looking for test storage... 
00:25:44.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:25:44.651 14:31:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:44.651 14:31:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:44.651 14:31:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:44.651 14:31:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:44.651 14:31:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:44.651 14:31:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:44.651 14:31:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:44.651 14:31:50 -- scripts/common.sh@335 -- # IFS=.-: 00:25:44.651 14:31:50 -- scripts/common.sh@335 -- # read -ra ver1 00:25:44.651 14:31:50 -- scripts/common.sh@336 -- # IFS=.-: 00:25:44.651 14:31:50 -- scripts/common.sh@336 -- # read -ra ver2 00:25:44.651 14:31:50 -- scripts/common.sh@337 -- # local 'op=<' 00:25:44.651 14:31:50 -- scripts/common.sh@339 -- # ver1_l=2 00:25:44.651 14:31:50 -- scripts/common.sh@340 -- # ver2_l=1 00:25:44.651 14:31:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:44.651 14:31:50 -- scripts/common.sh@343 -- # case "$op" in 00:25:44.651 14:31:50 -- scripts/common.sh@344 -- # : 1 00:25:44.651 14:31:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:44.651 14:31:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:44.651 14:31:50 -- scripts/common.sh@364 -- # decimal 1 00:25:44.651 14:31:50 -- scripts/common.sh@352 -- # local d=1 00:25:44.651 14:31:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:44.651 14:31:50 -- scripts/common.sh@354 -- # echo 1 00:25:44.651 14:31:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:44.651 14:31:50 -- scripts/common.sh@365 -- # decimal 2 00:25:44.651 14:31:50 -- scripts/common.sh@352 -- # local d=2 00:25:44.651 14:31:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:44.651 14:31:50 -- scripts/common.sh@354 -- # echo 2 00:25:44.651 14:31:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:44.651 14:31:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:44.651 14:31:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:44.651 14:31:50 -- scripts/common.sh@367 -- # return 0 00:25:44.651 14:31:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:44.651 14:31:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:44.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.651 --rc genhtml_branch_coverage=1 00:25:44.651 --rc genhtml_function_coverage=1 00:25:44.651 --rc genhtml_legend=1 00:25:44.651 --rc geninfo_all_blocks=1 00:25:44.651 --rc geninfo_unexecuted_blocks=1 00:25:44.651 00:25:44.651 ' 00:25:44.651 14:31:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:44.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.651 --rc genhtml_branch_coverage=1 00:25:44.651 --rc genhtml_function_coverage=1 00:25:44.651 --rc genhtml_legend=1 00:25:44.651 --rc geninfo_all_blocks=1 00:25:44.651 --rc geninfo_unexecuted_blocks=1 00:25:44.651 00:25:44.651 ' 00:25:44.651 14:31:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:44.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.651 --rc genhtml_branch_coverage=1 00:25:44.651 --rc genhtml_function_coverage=1 00:25:44.651 --rc genhtml_legend=1 00:25:44.651 --rc geninfo_all_blocks=1 00:25:44.651 --rc geninfo_unexecuted_blocks=1 00:25:44.651 00:25:44.651 ' 00:25:44.651 14:31:50 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:44.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.651 --rc genhtml_branch_coverage=1 00:25:44.652 --rc genhtml_function_coverage=1 00:25:44.652 --rc genhtml_legend=1 00:25:44.652 --rc geninfo_all_blocks=1 00:25:44.652 --rc geninfo_unexecuted_blocks=1 00:25:44.652 00:25:44.652 ' 00:25:44.652 14:31:50 -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:25:44.652 14:31:50 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:25:44.652 14:31:50 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:25:44.652 14:31:50 -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:44.652 14:31:50 -- nvmf/common.sh@7 -- # uname -s 00:25:44.911 14:31:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:44.911 14:31:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:44.911 14:31:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:44.911 14:31:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:44.911 14:31:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:44.911 14:31:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:44.911 14:31:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:44.911 14:31:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:44.911 14:31:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:44.911 14:31:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:44.911 14:31:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:25:44.912 14:31:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:25:44.912 14:31:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:44.912 14:31:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:44.912 14:31:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:44.912 14:31:50 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:44.912 14:31:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:44.912 14:31:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:44.912 14:31:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:44.912 14:31:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.912 14:31:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.912 14:31:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.912 14:31:50 -- paths/export.sh@5 -- # export PATH 00:25:44.912 14:31:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.912 14:31:50 -- nvmf/common.sh@46 -- # : 0 00:25:44.912 14:31:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:44.912 14:31:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:44.912 14:31:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:44.912 14:31:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:44.912 14:31:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:44.912 14:31:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:44.912 14:31:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:44.912 14:31:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:44.912 14:31:50 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:25:44.912 14:31:50 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:25:44.912 14:31:50 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:25:44.912 14:31:50 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:25:44.912 14:31:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:44.912 14:31:50 -- common/autotest_common.sh@10 -- # set +x 00:25:44.912 14:31:50 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:25:44.912 14:31:50 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=101309 00:25:44.912 14:31:50 -- spdkcli/common.sh@34 -- # waitforlisten 101309 00:25:44.912 14:31:50 -- common/autotest_common.sh@829 -- # '[' -z 101309 ']' 00:25:44.912 14:31:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:44.912 14:31:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:44.912 14:31:50 -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:25:44.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:44.912 14:31:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:44.912 14:31:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:44.912 14:31:50 -- common/autotest_common.sh@10 -- # set +x 00:25:44.912 [2024-12-05 14:31:50.383437] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:25:44.912 [2024-12-05 14:31:50.383552] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101309 ] 00:25:44.912 [2024-12-05 14:31:50.521500] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:45.171 [2024-12-05 14:31:50.594425] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:45.171 [2024-12-05 14:31:50.594784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:45.171 [2024-12-05 14:31:50.594793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:45.738 14:31:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:45.738 14:31:51 -- common/autotest_common.sh@862 -- # return 0 00:25:45.738 14:31:51 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:25:45.738 14:31:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:45.738 14:31:51 -- common/autotest_common.sh@10 -- # set +x 00:25:45.996 14:31:51 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:25:45.996 14:31:51 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:25:45.996 14:31:51 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:25:45.996 14:31:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:45.996 14:31:51 -- common/autotest_common.sh@10 -- # set +x 00:25:45.996 14:31:51 -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:45.996 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:45.996 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:45.996 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:45.996 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:45.996 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:45.996 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:45.996 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:45.996 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:45.996 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:25:45.996 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:45.996 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:45.996 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:45.996 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:45.996 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:45.996 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:45.996 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:45.996 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:45.996 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:45.996 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:45.996 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:45.996 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:45.996 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:45.996 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:25:45.996 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:45.996 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:45.996 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:45.996 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:45.996 ' 00:25:46.254 [2024-12-05 14:31:51.879900] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:48.781 [2024-12-05 14:31:54.143711] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:50.210 [2024-12-05 14:31:55.428813] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:52.750 [2024-12-05 14:31:57.814294] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:54.652 [2024-12-05 14:31:59.879600] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:25:56.029 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:56.029 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:56.029 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:56.029 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:56.029 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:56.029 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:56.029 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:56.029 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:56.029 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:56.029 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:56.029 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:56.030 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:56.030 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:56.030 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:56.030 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:56.030 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:56.030 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:56.030 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:56.030 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:56.030 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:56.030 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:56.030 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:56.030 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:56.030 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:56.030 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:56.030 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:56.030 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:25:56.030 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:56.030 14:32:01 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:56.030 14:32:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:56.030 14:32:01 -- common/autotest_common.sh@10 -- # set +x 00:25:56.030 14:32:01 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:56.030 14:32:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:56.030 14:32:01 -- common/autotest_common.sh@10 -- # set +x 00:25:56.030 14:32:01 -- spdkcli/nvmf.sh@69 -- # check_match 00:25:56.030 14:32:01 -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:25:56.597 14:32:02 -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:56.597 14:32:02 -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:56.597 14:32:02 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:56.597 14:32:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:56.597 14:32:02 -- common/autotest_common.sh@10 -- # set +x 00:25:56.597 14:32:02 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:56.597 14:32:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:56.597 14:32:02 -- 
common/autotest_common.sh@10 -- # set +x 00:25:56.597 14:32:02 -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:56.597 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:56.597 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:56.597 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:56.597 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:56.597 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:56.597 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:56.597 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:56.597 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:56.597 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:56.597 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:56.597 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:56.597 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:56.597 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:56.597 ' 00:26:03.162 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:26:03.162 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:26:03.162 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:03.162 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:26:03.162 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:26:03.162 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:26:03.162 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:26:03.162 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:03.162 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:26:03.162 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:26:03.162 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:26:03.162 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:26:03.162 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:26:03.162 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:26:03.162 14:32:07 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:26:03.162 14:32:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:03.162 14:32:07 -- common/autotest_common.sh@10 -- # set +x 00:26:03.162 14:32:07 -- spdkcli/nvmf.sh@90 -- # killprocess 101309 00:26:03.162 14:32:07 -- common/autotest_common.sh@936 -- # '[' -z 101309 ']' 00:26:03.162 14:32:07 -- common/autotest_common.sh@940 -- # kill -0 101309 00:26:03.162 14:32:07 -- common/autotest_common.sh@941 -- # uname 00:26:03.162 14:32:07 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:03.162 14:32:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 101309 00:26:03.163 14:32:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:03.163 14:32:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:03.163 killing process with pid 101309 00:26:03.163 14:32:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 101309' 00:26:03.163 14:32:07 -- common/autotest_common.sh@955 -- # kill 101309 00:26:03.163 [2024-12-05 14:32:07.779214] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:03.163 14:32:07 -- common/autotest_common.sh@960 -- # wait 101309 00:26:03.163 14:32:07 -- spdkcli/nvmf.sh@1 -- # cleanup 00:26:03.163 14:32:07 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:26:03.163 14:32:07 -- spdkcli/common.sh@13 -- # '[' -n 101309 ']' 00:26:03.163 14:32:07 -- spdkcli/common.sh@14 -- # killprocess 101309 00:26:03.163 14:32:07 -- common/autotest_common.sh@936 -- # '[' -z 101309 ']' 00:26:03.163 14:32:07 -- common/autotest_common.sh@940 -- # kill -0 101309 00:26:03.163 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (101309) - No such process 00:26:03.163 Process with pid 101309 is not found 00:26:03.163 14:32:07 -- common/autotest_common.sh@963 -- # echo 'Process with pid 101309 is not found' 00:26:03.163 14:32:07 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:26:03.163 14:32:07 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:26:03.163 14:32:07 -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:26:03.163 00:26:03.163 real 0m17.858s 00:26:03.163 user 0m38.662s 00:26:03.163 sys 0m1.035s 00:26:03.163 14:32:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:03.163 ************************************ 00:26:03.163 END TEST spdkcli_nvmf_tcp 00:26:03.163 ************************************ 00:26:03.163 14:32:07 -- common/autotest_common.sh@10 -- # set +x 00:26:03.163 14:32:08 -- spdk/autotest.sh@285 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:03.163 14:32:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:03.163 14:32:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:03.163 14:32:08 -- common/autotest_common.sh@10 -- # set +x 00:26:03.163 ************************************ 00:26:03.163 START TEST nvmf_identify_passthru 00:26:03.163 ************************************ 00:26:03.163 14:32:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:03.163 * Looking for test storage... 
00:26:03.163 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:03.163 14:32:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:03.163 14:32:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:03.163 14:32:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:03.163 14:32:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:03.163 14:32:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:03.163 14:32:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:03.163 14:32:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:03.163 14:32:08 -- scripts/common.sh@335 -- # IFS=.-: 00:26:03.163 14:32:08 -- scripts/common.sh@335 -- # read -ra ver1 00:26:03.163 14:32:08 -- scripts/common.sh@336 -- # IFS=.-: 00:26:03.163 14:32:08 -- scripts/common.sh@336 -- # read -ra ver2 00:26:03.163 14:32:08 -- scripts/common.sh@337 -- # local 'op=<' 00:26:03.163 14:32:08 -- scripts/common.sh@339 -- # ver1_l=2 00:26:03.163 14:32:08 -- scripts/common.sh@340 -- # ver2_l=1 00:26:03.163 14:32:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:03.163 14:32:08 -- scripts/common.sh@343 -- # case "$op" in 00:26:03.163 14:32:08 -- scripts/common.sh@344 -- # : 1 00:26:03.163 14:32:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:03.163 14:32:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:03.163 14:32:08 -- scripts/common.sh@364 -- # decimal 1 00:26:03.163 14:32:08 -- scripts/common.sh@352 -- # local d=1 00:26:03.163 14:32:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:03.163 14:32:08 -- scripts/common.sh@354 -- # echo 1 00:26:03.163 14:32:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:03.163 14:32:08 -- scripts/common.sh@365 -- # decimal 2 00:26:03.163 14:32:08 -- scripts/common.sh@352 -- # local d=2 00:26:03.163 14:32:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:03.163 14:32:08 -- scripts/common.sh@354 -- # echo 2 00:26:03.163 14:32:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:03.163 14:32:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:03.163 14:32:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:03.163 14:32:08 -- scripts/common.sh@367 -- # return 0 00:26:03.163 14:32:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:03.163 14:32:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:03.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:03.163 --rc genhtml_branch_coverage=1 00:26:03.163 --rc genhtml_function_coverage=1 00:26:03.163 --rc genhtml_legend=1 00:26:03.163 --rc geninfo_all_blocks=1 00:26:03.163 --rc geninfo_unexecuted_blocks=1 00:26:03.163 00:26:03.163 ' 00:26:03.163 14:32:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:03.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:03.163 --rc genhtml_branch_coverage=1 00:26:03.163 --rc genhtml_function_coverage=1 00:26:03.163 --rc genhtml_legend=1 00:26:03.163 --rc geninfo_all_blocks=1 00:26:03.163 --rc geninfo_unexecuted_blocks=1 00:26:03.163 00:26:03.163 ' 00:26:03.163 14:32:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:03.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:03.163 --rc genhtml_branch_coverage=1 00:26:03.163 --rc genhtml_function_coverage=1 00:26:03.163 --rc genhtml_legend=1 00:26:03.163 --rc geninfo_all_blocks=1 00:26:03.163 --rc geninfo_unexecuted_blocks=1 00:26:03.163 00:26:03.163 ' 00:26:03.163 
14:32:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:03.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:03.163 --rc genhtml_branch_coverage=1 00:26:03.163 --rc genhtml_function_coverage=1 00:26:03.163 --rc genhtml_legend=1 00:26:03.163 --rc geninfo_all_blocks=1 00:26:03.163 --rc geninfo_unexecuted_blocks=1 00:26:03.163 00:26:03.163 ' 00:26:03.163 14:32:08 -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:03.163 14:32:08 -- nvmf/common.sh@7 -- # uname -s 00:26:03.163 14:32:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:03.163 14:32:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:03.163 14:32:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:03.163 14:32:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:03.163 14:32:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:03.163 14:32:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:03.163 14:32:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:03.163 14:32:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:03.163 14:32:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:03.163 14:32:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:03.163 14:32:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:26:03.163 14:32:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:26:03.163 14:32:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:03.163 14:32:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:03.163 14:32:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:03.163 14:32:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:03.163 14:32:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:03.163 14:32:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:03.163 14:32:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:03.163 14:32:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.163 14:32:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.163 14:32:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.163 14:32:08 -- paths/export.sh@5 -- # export PATH 00:26:03.163 14:32:08 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.163 14:32:08 -- nvmf/common.sh@46 -- # : 0 00:26:03.163 14:32:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:03.163 14:32:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:03.163 14:32:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:03.163 14:32:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:03.163 14:32:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:03.163 14:32:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:03.163 14:32:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:03.163 14:32:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:03.163 14:32:08 -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:03.163 14:32:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:03.163 14:32:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:03.163 14:32:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:03.164 14:32:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.164 14:32:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.164 14:32:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.164 14:32:08 -- paths/export.sh@5 -- # export PATH 00:26:03.164 14:32:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.164 14:32:08 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:26:03.164 14:32:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:03.164 14:32:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:03.164 14:32:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:03.164 14:32:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:03.164 14:32:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:03.164 14:32:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:03.164 14:32:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:03.164 14:32:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:03.164 14:32:08 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:26:03.164 14:32:08 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:26:03.164 14:32:08 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:26:03.164 14:32:08 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:26:03.164 14:32:08 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:26:03.164 14:32:08 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:26:03.164 14:32:08 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:03.164 14:32:08 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:03.164 14:32:08 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:03.164 14:32:08 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:26:03.164 14:32:08 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:03.164 14:32:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:03.164 14:32:08 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:03.164 14:32:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:03.164 14:32:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:03.164 14:32:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:03.164 14:32:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:03.164 14:32:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:03.164 14:32:08 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:26:03.164 14:32:08 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:26:03.164 Cannot find device "nvmf_tgt_br" 00:26:03.164 14:32:08 -- nvmf/common.sh@154 -- # true 00:26:03.164 14:32:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:26:03.164 Cannot find device "nvmf_tgt_br2" 00:26:03.164 14:32:08 -- nvmf/common.sh@155 -- # true 00:26:03.164 14:32:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:26:03.164 14:32:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:26:03.164 Cannot find device "nvmf_tgt_br" 00:26:03.164 14:32:08 -- nvmf/common.sh@157 -- # true 00:26:03.164 14:32:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:26:03.164 Cannot find device "nvmf_tgt_br2" 00:26:03.164 14:32:08 -- nvmf/common.sh@158 -- # true 00:26:03.164 14:32:08 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:26:03.164 14:32:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:26:03.164 14:32:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:03.164 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:03.164 14:32:08 -- nvmf/common.sh@161 -- # true 00:26:03.164 14:32:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:03.164 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:26:03.164 14:32:08 -- nvmf/common.sh@162 -- # true 00:26:03.164 14:32:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:26:03.164 14:32:08 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:03.164 14:32:08 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:03.164 14:32:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:03.164 14:32:08 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:03.164 14:32:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:03.164 14:32:08 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:03.164 14:32:08 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:03.164 14:32:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:03.164 14:32:08 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:26:03.164 14:32:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:26:03.164 14:32:08 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:26:03.164 14:32:08 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:26:03.164 14:32:08 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:03.164 14:32:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:03.164 14:32:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:03.164 14:32:08 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:26:03.164 14:32:08 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:26:03.164 14:32:08 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:26:03.164 14:32:08 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:03.164 14:32:08 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:03.164 14:32:08 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:03.164 14:32:08 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:03.164 14:32:08 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:26:03.164 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:03.164 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:26:03.164 00:26:03.164 --- 10.0.0.2 ping statistics --- 00:26:03.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:03.164 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:26:03.164 14:32:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:26:03.164 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:03.164 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:26:03.164 00:26:03.164 --- 10.0.0.3 ping statistics --- 00:26:03.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:03.164 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:26:03.164 14:32:08 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:03.164 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:03.164 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:26:03.164 00:26:03.164 --- 10.0.0.1 ping statistics --- 00:26:03.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:03.164 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:26:03.164 14:32:08 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:03.164 14:32:08 -- nvmf/common.sh@421 -- # return 0 00:26:03.164 14:32:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:03.164 14:32:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:03.164 14:32:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:03.164 14:32:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:03.164 14:32:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:03.164 14:32:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:03.164 14:32:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:03.164 14:32:08 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:26:03.164 14:32:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:03.164 14:32:08 -- common/autotest_common.sh@10 -- # set +x 00:26:03.164 14:32:08 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:26:03.164 14:32:08 -- common/autotest_common.sh@1519 -- # bdfs=() 00:26:03.164 14:32:08 -- common/autotest_common.sh@1519 -- # local bdfs 00:26:03.164 14:32:08 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:26:03.164 14:32:08 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:26:03.164 14:32:08 -- common/autotest_common.sh@1508 -- # bdfs=() 00:26:03.164 14:32:08 -- common/autotest_common.sh@1508 -- # local bdfs 00:26:03.164 14:32:08 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:03.164 14:32:08 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:26:03.164 14:32:08 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:26:03.164 14:32:08 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:26:03.164 14:32:08 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:26:03.164 14:32:08 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:26:03.164 14:32:08 -- target/identify_passthru.sh@16 -- # bdf=0000:00:06.0 00:26:03.164 14:32:08 -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:06.0 ']' 00:26:03.164 14:32:08 -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:26:03.164 14:32:08 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:26:03.164 14:32:08 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:26:03.164 14:32:08 -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:26:03.164 14:32:08 -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:26:03.164 14:32:08 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:26:03.164 14:32:08 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:26:03.423 14:32:08 -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:26:03.423 14:32:08 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:26:03.423 14:32:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:03.423 14:32:08 -- common/autotest_common.sh@10 -- # set +x 00:26:03.423 14:32:09 -- target/identify_passthru.sh@28 -- # timing_enter 
start_nvmf_tgt 00:26:03.423 14:32:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:03.423 14:32:09 -- common/autotest_common.sh@10 -- # set +x 00:26:03.423 14:32:09 -- target/identify_passthru.sh@31 -- # nvmfpid=101813 00:26:03.423 14:32:09 -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:03.423 14:32:09 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:03.423 14:32:09 -- target/identify_passthru.sh@35 -- # waitforlisten 101813 00:26:03.423 14:32:09 -- common/autotest_common.sh@829 -- # '[' -z 101813 ']' 00:26:03.423 14:32:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:03.423 14:32:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:03.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:03.423 14:32:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:03.423 14:32:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:03.423 14:32:09 -- common/autotest_common.sh@10 -- # set +x 00:26:03.682 [2024-12-05 14:32:09.080583] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:03.682 [2024-12-05 14:32:09.080690] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:03.682 [2024-12-05 14:32:09.221810] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:03.682 [2024-12-05 14:32:09.278734] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:03.682 [2024-12-05 14:32:09.279099] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:03.682 [2024-12-05 14:32:09.279190] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:03.682 [2024-12-05 14:32:09.279260] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
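The target above is launched inside the nvmf_tgt_ns_spdk namespace with --wait-for-rpc, so subsystem initialization is deferred until the test explicitly starts the framework, and the harness blocks until the RPC socket answers. A minimal standalone sketch of that start-and-wait pattern (the binary path and the polling loop are illustrative; the harness uses its own waitforlisten helper):

  # start the target in the test namespace; --wait-for-rpc defers framework init
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  nvmfpid=$!

  # poll the UNIX-domain RPC socket until the target is ready for configuration RPCs
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done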
00:26:03.682 [2024-12-05 14:32:09.279470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:03.682 [2024-12-05 14:32:09.279605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:03.682 [2024-12-05 14:32:09.280252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:03.682 [2024-12-05 14:32:09.280265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:03.682 14:32:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:03.682 14:32:09 -- common/autotest_common.sh@862 -- # return 0 00:26:03.682 14:32:09 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:26:03.682 14:32:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.682 14:32:09 -- common/autotest_common.sh@10 -- # set +x 00:26:03.941 14:32:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.941 14:32:09 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:26:03.942 14:32:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.942 14:32:09 -- common/autotest_common.sh@10 -- # set +x 00:26:03.942 [2024-12-05 14:32:09.424960] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:26:03.942 14:32:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.942 14:32:09 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:03.942 14:32:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.942 14:32:09 -- common/autotest_common.sh@10 -- # set +x 00:26:03.942 [2024-12-05 14:32:09.439079] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:03.942 14:32:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.942 14:32:09 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:26:03.942 14:32:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:03.942 14:32:09 -- common/autotest_common.sh@10 -- # set +x 00:26:03.942 14:32:09 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:26:03.942 14:32:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.942 14:32:09 -- common/autotest_common.sh@10 -- # set +x 00:26:03.942 Nvme0n1 00:26:03.942 14:32:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.942 14:32:09 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:26:03.942 14:32:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.942 14:32:09 -- common/autotest_common.sh@10 -- # set +x 00:26:03.942 14:32:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.942 14:32:09 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:03.942 14:32:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.942 14:32:09 -- common/autotest_common.sh@10 -- # set +x 00:26:03.942 14:32:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.942 14:32:09 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:03.942 14:32:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.942 14:32:09 -- common/autotest_common.sh@10 -- # set +x 00:26:03.942 [2024-12-05 14:32:09.576763] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:03.942 14:32:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:26:03.942 14:32:09 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:26:03.942 14:32:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.942 14:32:09 -- common/autotest_common.sh@10 -- # set +x 00:26:03.942 [2024-12-05 14:32:09.584584] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:04.201 [ 00:26:04.201 { 00:26:04.201 "allow_any_host": true, 00:26:04.201 "hosts": [], 00:26:04.201 "listen_addresses": [], 00:26:04.201 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:04.201 "subtype": "Discovery" 00:26:04.201 }, 00:26:04.201 { 00:26:04.201 "allow_any_host": true, 00:26:04.201 "hosts": [], 00:26:04.201 "listen_addresses": [ 00:26:04.201 { 00:26:04.201 "adrfam": "IPv4", 00:26:04.201 "traddr": "10.0.0.2", 00:26:04.201 "transport": "TCP", 00:26:04.201 "trsvcid": "4420", 00:26:04.201 "trtype": "TCP" 00:26:04.201 } 00:26:04.201 ], 00:26:04.201 "max_cntlid": 65519, 00:26:04.201 "max_namespaces": 1, 00:26:04.201 "min_cntlid": 1, 00:26:04.201 "model_number": "SPDK bdev Controller", 00:26:04.201 "namespaces": [ 00:26:04.201 { 00:26:04.201 "bdev_name": "Nvme0n1", 00:26:04.201 "name": "Nvme0n1", 00:26:04.201 "nguid": "9D3074BD2B79496B84006542C9597AC8", 00:26:04.201 "nsid": 1, 00:26:04.201 "uuid": "9d3074bd-2b79-496b-8400-6542c9597ac8" 00:26:04.201 } 00:26:04.201 ], 00:26:04.201 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:04.201 "serial_number": "SPDK00000000000001", 00:26:04.201 "subtype": "NVMe" 00:26:04.201 } 00:26:04.201 ] 00:26:04.201 14:32:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.201 14:32:09 -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:04.201 14:32:09 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:26:04.201 14:32:09 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:26:04.201 14:32:09 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:26:04.201 14:32:09 -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:04.201 14:32:09 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:26:04.201 14:32:09 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:26:04.460 14:32:10 -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:26:04.460 14:32:10 -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:26:04.460 14:32:10 -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:26:04.460 14:32:10 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:04.460 14:32:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.460 14:32:10 -- common/autotest_common.sh@10 -- # set +x 00:26:04.460 14:32:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.460 14:32:10 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:26:04.460 14:32:10 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:26:04.460 14:32:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:04.460 14:32:10 -- nvmf/common.sh@116 -- # sync 00:26:04.720 14:32:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:04.720 14:32:10 -- nvmf/common.sh@119 -- # set +e 00:26:04.720 14:32:10 -- nvmf/common.sh@120 -- # for i in 
{1..20} 00:26:04.720 14:32:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:04.720 rmmod nvme_tcp 00:26:04.720 rmmod nvme_fabrics 00:26:04.720 rmmod nvme_keyring 00:26:04.720 14:32:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:04.720 14:32:10 -- nvmf/common.sh@123 -- # set -e 00:26:04.720 14:32:10 -- nvmf/common.sh@124 -- # return 0 00:26:04.720 14:32:10 -- nvmf/common.sh@477 -- # '[' -n 101813 ']' 00:26:04.720 14:32:10 -- nvmf/common.sh@478 -- # killprocess 101813 00:26:04.720 14:32:10 -- common/autotest_common.sh@936 -- # '[' -z 101813 ']' 00:26:04.720 14:32:10 -- common/autotest_common.sh@940 -- # kill -0 101813 00:26:04.720 14:32:10 -- common/autotest_common.sh@941 -- # uname 00:26:04.720 14:32:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:04.720 14:32:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 101813 00:26:04.720 14:32:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:04.720 killing process with pid 101813 00:26:04.720 14:32:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:04.720 14:32:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 101813' 00:26:04.720 14:32:10 -- common/autotest_common.sh@955 -- # kill 101813 00:26:04.720 [2024-12-05 14:32:10.194523] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:04.720 14:32:10 -- common/autotest_common.sh@960 -- # wait 101813 00:26:04.979 14:32:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:04.979 14:32:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:04.979 14:32:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:04.979 14:32:10 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:04.979 14:32:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:04.979 14:32:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:04.979 14:32:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:04.979 14:32:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.979 14:32:10 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:04.979 00:26:04.979 real 0m2.418s 00:26:04.979 user 0m4.729s 00:26:04.979 sys 0m0.807s 00:26:04.979 14:32:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:04.979 ************************************ 00:26:04.979 14:32:10 -- common/autotest_common.sh@10 -- # set +x 00:26:04.979 END TEST nvmf_identify_passthru 00:26:04.979 ************************************ 00:26:04.979 14:32:10 -- spdk/autotest.sh@287 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:26:04.979 14:32:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:04.979 14:32:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:04.979 14:32:10 -- common/autotest_common.sh@10 -- # set +x 00:26:04.979 ************************************ 00:26:04.979 START TEST nvmf_dif 00:26:04.979 ************************************ 00:26:04.979 14:32:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:26:04.979 * Looking for test storage... 
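Condensed, the passthru check that just finished is: enable the custom identify handler, attach the local PCIe drive as a bdev, export it over TCP, and confirm that identify data read through the fabric matches what was read directly over PCIe. A rough equivalent using scripts/rpc.py directly (the harness routes these calls through its rpc_cmd wrapper; paths are abbreviated):

  rpc=./scripts/rpc.py
  $rpc nvmf_set_config --passthru-identify-ctrlr      # forward identify to the backing controller
  $rpc framework_start_init
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # the serial number reported over TCP must match the one read over PCIe
  tcp_sn=$(./build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      | awk '/Serial Number:/ {print $3}')
  pcie_sn=$(./build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' \
      | awk '/Serial Number:/ {print $3}')
  [ "$tcp_sn" = "$pcie_sn" ] || exit 1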
00:26:04.979 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:04.979 14:32:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:04.979 14:32:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:04.979 14:32:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:05.239 14:32:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:05.239 14:32:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:05.239 14:32:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:05.239 14:32:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:05.239 14:32:10 -- scripts/common.sh@335 -- # IFS=.-: 00:26:05.239 14:32:10 -- scripts/common.sh@335 -- # read -ra ver1 00:26:05.239 14:32:10 -- scripts/common.sh@336 -- # IFS=.-: 00:26:05.239 14:32:10 -- scripts/common.sh@336 -- # read -ra ver2 00:26:05.239 14:32:10 -- scripts/common.sh@337 -- # local 'op=<' 00:26:05.239 14:32:10 -- scripts/common.sh@339 -- # ver1_l=2 00:26:05.239 14:32:10 -- scripts/common.sh@340 -- # ver2_l=1 00:26:05.239 14:32:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:05.239 14:32:10 -- scripts/common.sh@343 -- # case "$op" in 00:26:05.239 14:32:10 -- scripts/common.sh@344 -- # : 1 00:26:05.239 14:32:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:05.239 14:32:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:05.239 14:32:10 -- scripts/common.sh@364 -- # decimal 1 00:26:05.239 14:32:10 -- scripts/common.sh@352 -- # local d=1 00:26:05.239 14:32:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:05.239 14:32:10 -- scripts/common.sh@354 -- # echo 1 00:26:05.239 14:32:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:05.239 14:32:10 -- scripts/common.sh@365 -- # decimal 2 00:26:05.239 14:32:10 -- scripts/common.sh@352 -- # local d=2 00:26:05.239 14:32:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:05.239 14:32:10 -- scripts/common.sh@354 -- # echo 2 00:26:05.239 14:32:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:05.239 14:32:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:05.239 14:32:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:05.239 14:32:10 -- scripts/common.sh@367 -- # return 0 00:26:05.239 14:32:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:05.239 14:32:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:05.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.239 --rc genhtml_branch_coverage=1 00:26:05.239 --rc genhtml_function_coverage=1 00:26:05.239 --rc genhtml_legend=1 00:26:05.239 --rc geninfo_all_blocks=1 00:26:05.239 --rc geninfo_unexecuted_blocks=1 00:26:05.239 00:26:05.239 ' 00:26:05.239 14:32:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:05.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.239 --rc genhtml_branch_coverage=1 00:26:05.239 --rc genhtml_function_coverage=1 00:26:05.239 --rc genhtml_legend=1 00:26:05.239 --rc geninfo_all_blocks=1 00:26:05.239 --rc geninfo_unexecuted_blocks=1 00:26:05.239 00:26:05.239 ' 00:26:05.239 14:32:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:05.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.239 --rc genhtml_branch_coverage=1 00:26:05.239 --rc genhtml_function_coverage=1 00:26:05.239 --rc genhtml_legend=1 00:26:05.239 --rc geninfo_all_blocks=1 00:26:05.239 --rc geninfo_unexecuted_blocks=1 00:26:05.239 00:26:05.239 ' 00:26:05.239 
14:32:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:05.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.239 --rc genhtml_branch_coverage=1 00:26:05.239 --rc genhtml_function_coverage=1 00:26:05.239 --rc genhtml_legend=1 00:26:05.239 --rc geninfo_all_blocks=1 00:26:05.239 --rc geninfo_unexecuted_blocks=1 00:26:05.239 00:26:05.239 ' 00:26:05.239 14:32:10 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:05.239 14:32:10 -- nvmf/common.sh@7 -- # uname -s 00:26:05.239 14:32:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:05.239 14:32:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:05.239 14:32:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:05.239 14:32:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:05.239 14:32:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:05.239 14:32:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:05.239 14:32:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:05.239 14:32:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:05.239 14:32:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:05.239 14:32:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:05.239 14:32:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:26:05.239 14:32:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:26:05.239 14:32:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:05.239 14:32:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:05.239 14:32:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:05.239 14:32:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:05.239 14:32:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:05.239 14:32:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:05.239 14:32:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:05.239 14:32:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.239 14:32:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.239 14:32:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.239 14:32:10 -- paths/export.sh@5 -- # export PATH 00:26:05.239 14:32:10 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.239 14:32:10 -- nvmf/common.sh@46 -- # : 0 00:26:05.239 14:32:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:05.239 14:32:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:05.239 14:32:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:05.239 14:32:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:05.239 14:32:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:05.239 14:32:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:05.239 14:32:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:05.239 14:32:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:05.239 14:32:10 -- target/dif.sh@15 -- # NULL_META=16 00:26:05.239 14:32:10 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:26:05.239 14:32:10 -- target/dif.sh@15 -- # NULL_SIZE=64 00:26:05.239 14:32:10 -- target/dif.sh@15 -- # NULL_DIF=1 00:26:05.239 14:32:10 -- target/dif.sh@135 -- # nvmftestinit 00:26:05.239 14:32:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:05.239 14:32:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:05.239 14:32:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:05.239 14:32:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:05.239 14:32:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:05.239 14:32:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:05.239 14:32:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:05.239 14:32:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.239 14:32:10 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:26:05.239 14:32:10 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:26:05.239 14:32:10 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:26:05.239 14:32:10 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:26:05.239 14:32:10 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:26:05.239 14:32:10 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:26:05.239 14:32:10 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:05.239 14:32:10 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:05.239 14:32:10 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:05.239 14:32:10 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:26:05.239 14:32:10 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:05.239 14:32:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:05.239 14:32:10 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:05.239 14:32:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:05.239 14:32:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:05.239 14:32:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:05.239 14:32:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:05.239 14:32:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:05.239 14:32:10 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:26:05.239 14:32:10 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:26:05.239 Cannot find device "nvmf_tgt_br" 
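dif.sh pins its backing-device geometry up front: NULL_SIZE=64, NULL_BLOCK_SIZE=512, NULL_META=16 and NULL_DIF=1 translate into a 64 MiB null bdev with 512-byte blocks, 16 bytes of metadata per block and protection information type 1, which is what each subsystem created later in the run sits on. Issued directly, the create call looks like this (same arguments the test uses):

  # 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1
  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1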
00:26:05.239 14:32:10 -- nvmf/common.sh@154 -- # true 00:26:05.239 14:32:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:26:05.240 Cannot find device "nvmf_tgt_br2" 00:26:05.240 14:32:10 -- nvmf/common.sh@155 -- # true 00:26:05.240 14:32:10 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:26:05.240 14:32:10 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:26:05.240 Cannot find device "nvmf_tgt_br" 00:26:05.240 14:32:10 -- nvmf/common.sh@157 -- # true 00:26:05.240 14:32:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:26:05.240 Cannot find device "nvmf_tgt_br2" 00:26:05.240 14:32:10 -- nvmf/common.sh@158 -- # true 00:26:05.240 14:32:10 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:26:05.240 14:32:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:26:05.240 14:32:10 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:05.240 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:05.240 14:32:10 -- nvmf/common.sh@161 -- # true 00:26:05.240 14:32:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:05.240 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:05.240 14:32:10 -- nvmf/common.sh@162 -- # true 00:26:05.240 14:32:10 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:26:05.240 14:32:10 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:05.240 14:32:10 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:05.240 14:32:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:05.240 14:32:10 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:05.499 14:32:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:05.499 14:32:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:05.499 14:32:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:05.499 14:32:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:05.499 14:32:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:26:05.499 14:32:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:26:05.499 14:32:10 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:26:05.499 14:32:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:26:05.499 14:32:10 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:05.499 14:32:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:05.499 14:32:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:05.499 14:32:10 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:26:05.499 14:32:10 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:26:05.499 14:32:10 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:26:05.499 14:32:10 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:05.499 14:32:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:05.499 14:32:11 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:05.499 14:32:11 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:05.499 14:32:11 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:26:05.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:05.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:26:05.499 00:26:05.499 --- 10.0.0.2 ping statistics --- 00:26:05.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.499 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:26:05.499 14:32:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:26:05.499 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:05.499 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:26:05.499 00:26:05.499 --- 10.0.0.3 ping statistics --- 00:26:05.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.499 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:26:05.499 14:32:11 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:05.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:05.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:26:05.499 00:26:05.499 --- 10.0.0.1 ping statistics --- 00:26:05.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.499 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:26:05.499 14:32:11 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:05.499 14:32:11 -- nvmf/common.sh@421 -- # return 0 00:26:05.499 14:32:11 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:26:05.499 14:32:11 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:05.757 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:05.757 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:05.757 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:06.016 14:32:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:06.016 14:32:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:06.016 14:32:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:06.016 14:32:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:06.016 14:32:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:06.016 14:32:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:06.016 14:32:11 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:26:06.016 14:32:11 -- target/dif.sh@137 -- # nvmfappstart 00:26:06.016 14:32:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:06.016 14:32:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:06.016 14:32:11 -- common/autotest_common.sh@10 -- # set +x 00:26:06.017 14:32:11 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:06.017 14:32:11 -- nvmf/common.sh@469 -- # nvmfpid=102155 00:26:06.017 14:32:11 -- nvmf/common.sh@470 -- # waitforlisten 102155 00:26:06.017 14:32:11 -- common/autotest_common.sh@829 -- # '[' -z 102155 ']' 00:26:06.017 14:32:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:06.017 14:32:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:06.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:06.017 14:32:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
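The plumbing rebuilt above gives the target its own network stack: the *_if end of each veth pair carries a 10.0.0.x/24 address (target ends inside the nvmf_tgt_ns_spdk namespace), the *_br peers stay in the root namespace enslaved to the nvmf_br bridge, and an iptables rule admits TCP/4420 on the initiator interface. Stripped of the harness wrappers, the topology for the first target interface is roughly as follows (second target interface and the FORWARD rule omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2   # initiator -> target reachability check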
00:26:06.017 14:32:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:06.017 14:32:11 -- common/autotest_common.sh@10 -- # set +x 00:26:06.017 [2024-12-05 14:32:11.510316] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:06.017 [2024-12-05 14:32:11.510401] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:06.017 [2024-12-05 14:32:11.652965] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.276 [2024-12-05 14:32:11.742399] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:06.276 [2024-12-05 14:32:11.742582] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:06.276 [2024-12-05 14:32:11.742599] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:06.276 [2024-12-05 14:32:11.742611] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:06.276 [2024-12-05 14:32:11.742655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.845 14:32:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:06.845 14:32:12 -- common/autotest_common.sh@862 -- # return 0 00:26:06.845 14:32:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:06.845 14:32:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:06.845 14:32:12 -- common/autotest_common.sh@10 -- # set +x 00:26:06.845 14:32:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:06.845 14:32:12 -- target/dif.sh@139 -- # create_transport 00:26:06.845 14:32:12 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:26:06.845 14:32:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.846 14:32:12 -- common/autotest_common.sh@10 -- # set +x 00:26:06.846 [2024-12-05 14:32:12.427924] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:06.846 14:32:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.846 14:32:12 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:26:06.846 14:32:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:06.846 14:32:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:06.846 14:32:12 -- common/autotest_common.sh@10 -- # set +x 00:26:06.846 ************************************ 00:26:06.846 START TEST fio_dif_1_default 00:26:06.846 ************************************ 00:26:06.846 14:32:12 -- common/autotest_common.sh@1114 -- # fio_dif_1 00:26:06.846 14:32:12 -- target/dif.sh@86 -- # create_subsystems 0 00:26:06.846 14:32:12 -- target/dif.sh@28 -- # local sub 00:26:06.846 14:32:12 -- target/dif.sh@30 -- # for sub in "$@" 00:26:06.846 14:32:12 -- target/dif.sh@31 -- # create_subsystem 0 00:26:06.846 14:32:12 -- target/dif.sh@18 -- # local sub_id=0 00:26:06.846 14:32:12 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:06.846 14:32:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.846 14:32:12 -- common/autotest_common.sh@10 -- # set +x 00:26:06.846 bdev_null0 00:26:06.846 14:32:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.846 14:32:12 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:06.846 14:32:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.846 14:32:12 -- common/autotest_common.sh@10 -- # set +x 00:26:06.846 14:32:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.846 14:32:12 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:06.846 14:32:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.846 14:32:12 -- common/autotest_common.sh@10 -- # set +x 00:26:06.846 14:32:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.846 14:32:12 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:06.846 14:32:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.846 14:32:12 -- common/autotest_common.sh@10 -- # set +x 00:26:06.846 [2024-12-05 14:32:12.488124] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:07.105 14:32:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.105 14:32:12 -- target/dif.sh@87 -- # fio /dev/fd/62 00:26:07.105 14:32:12 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:26:07.105 14:32:12 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:07.105 14:32:12 -- nvmf/common.sh@520 -- # config=() 00:26:07.105 14:32:12 -- nvmf/common.sh@520 -- # local subsystem config 00:26:07.105 14:32:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:07.105 14:32:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:07.105 { 00:26:07.105 "params": { 00:26:07.105 "name": "Nvme$subsystem", 00:26:07.105 "trtype": "$TEST_TRANSPORT", 00:26:07.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:07.105 "adrfam": "ipv4", 00:26:07.105 "trsvcid": "$NVMF_PORT", 00:26:07.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:07.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:07.105 "hdgst": ${hdgst:-false}, 00:26:07.105 "ddgst": ${ddgst:-false} 00:26:07.105 }, 00:26:07.105 "method": "bdev_nvme_attach_controller" 00:26:07.105 } 00:26:07.105 EOF 00:26:07.105 )") 00:26:07.105 14:32:12 -- target/dif.sh@82 -- # gen_fio_conf 00:26:07.105 14:32:12 -- target/dif.sh@54 -- # local file 00:26:07.105 14:32:12 -- target/dif.sh@56 -- # cat 00:26:07.105 14:32:12 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:07.105 14:32:12 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:07.105 14:32:12 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:07.105 14:32:12 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:07.105 14:32:12 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:07.106 14:32:12 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:07.106 14:32:12 -- nvmf/common.sh@542 -- # cat 00:26:07.106 14:32:12 -- common/autotest_common.sh@1330 -- # shift 00:26:07.106 14:32:12 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:07.106 14:32:12 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:07.106 14:32:12 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:07.106 14:32:12 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:07.106 14:32:12 -- target/dif.sh@72 -- # (( file <= files )) 00:26:07.106 
14:32:12 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:07.106 14:32:12 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:07.106 14:32:12 -- nvmf/common.sh@544 -- # jq . 00:26:07.106 14:32:12 -- nvmf/common.sh@545 -- # IFS=, 00:26:07.106 14:32:12 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:07.106 "params": { 00:26:07.106 "name": "Nvme0", 00:26:07.106 "trtype": "tcp", 00:26:07.106 "traddr": "10.0.0.2", 00:26:07.106 "adrfam": "ipv4", 00:26:07.106 "trsvcid": "4420", 00:26:07.106 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:07.106 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:07.106 "hdgst": false, 00:26:07.106 "ddgst": false 00:26:07.106 }, 00:26:07.106 "method": "bdev_nvme_attach_controller" 00:26:07.106 }' 00:26:07.106 14:32:12 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:07.106 14:32:12 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:07.106 14:32:12 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:07.106 14:32:12 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:07.106 14:32:12 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:07.106 14:32:12 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:07.106 14:32:12 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:07.106 14:32:12 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:07.106 14:32:12 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:07.106 14:32:12 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:07.106 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:07.106 fio-3.35 00:26:07.106 Starting 1 thread 00:26:07.674 [2024-12-05 14:32:13.144389] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
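The fio run being set up here leans on the SPDK fio plugin: the JSON fed through fd 62 attaches the remote subsystem as bdev Nvme0, and the job file on fd 61 names that bdev as its filename with ioengine=spdk_bdev. A file-based equivalent is sketched below; the rw/bs/iodepth values come from the job banner that follows, while runtime/time_based and the Nvme0n1 filename are inferred rather than shown verbatim in the log:

  # /tmp/bdev.json holds the bdev_nvme_attach_controller config printed above
  LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio \
      --ioengine=spdk_bdev --spdk_json_conf=/tmp/bdev.json --thread=1 \
      --name=filename0 --filename=Nvme0n1 \
      --rw=randread --bs=4k --iodepth=4 --time_based=1 --runtime=10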
00:26:07.674 [2024-12-05 14:32:13.144472] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:17.646 00:26:17.646 filename0: (groupid=0, jobs=1): err= 0: pid=102240: Thu Dec 5 14:32:23 2024 00:26:17.646 read: IOPS=2014, BW=8058KiB/s (8252kB/s)(78.7MiB/10001msec) 00:26:17.646 slat (nsec): min=5760, max=57550, avg=6897.80, stdev=2375.07 00:26:17.647 clat (usec): min=337, max=42321, avg=1964.90, stdev=7875.08 00:26:17.647 lat (usec): min=343, max=42330, avg=1971.80, stdev=7875.13 00:26:17.647 clat percentiles (usec): 00:26:17.647 | 1.00th=[ 343], 5.00th=[ 347], 10.00th=[ 351], 20.00th=[ 355], 00:26:17.647 | 30.00th=[ 363], 40.00th=[ 363], 50.00th=[ 371], 60.00th=[ 375], 00:26:17.647 | 70.00th=[ 383], 80.00th=[ 400], 90.00th=[ 424], 95.00th=[ 465], 00:26:17.647 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:26:17.647 | 99.99th=[42206] 00:26:17.647 bw ( KiB/s): min= 4192, max=13152, per=100.00%, avg=8104.42, stdev=2136.85, samples=19 00:26:17.647 iops : min= 1048, max= 3288, avg=2026.11, stdev=534.21, samples=19 00:26:17.647 lat (usec) : 500=95.80%, 750=0.27% 00:26:17.647 lat (msec) : 4=0.02%, 50=3.91% 00:26:17.647 cpu : usr=91.49%, sys=7.58%, ctx=24, majf=0, minf=0 00:26:17.647 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:17.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.647 issued rwts: total=20148,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.647 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:17.647 00:26:17.647 Run status group 0 (all jobs): 00:26:17.647 READ: bw=8058KiB/s (8252kB/s), 8058KiB/s-8058KiB/s (8252kB/s-8252kB/s), io=78.7MiB (82.5MB), run=10001-10001msec 00:26:17.905 14:32:23 -- target/dif.sh@88 -- # destroy_subsystems 0 00:26:17.905 14:32:23 -- target/dif.sh@43 -- # local sub 00:26:17.905 14:32:23 -- target/dif.sh@45 -- # for sub in "$@" 00:26:17.905 14:32:23 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:17.905 14:32:23 -- target/dif.sh@36 -- # local sub_id=0 00:26:17.905 14:32:23 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:17.905 14:32:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.905 14:32:23 -- common/autotest_common.sh@10 -- # set +x 00:26:17.905 14:32:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.905 14:32:23 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:17.905 14:32:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.905 14:32:23 -- common/autotest_common.sh@10 -- # set +x 00:26:17.905 14:32:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.905 00:26:17.905 real 0m11.051s 00:26:17.905 user 0m9.800s 00:26:17.905 sys 0m1.039s 00:26:17.905 14:32:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:17.905 14:32:23 -- common/autotest_common.sh@10 -- # set +x 00:26:17.905 ************************************ 00:26:17.905 END TEST fio_dif_1_default 00:26:17.905 ************************************ 00:26:17.905 14:32:23 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:26:17.905 14:32:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:17.905 14:32:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:17.905 14:32:23 -- common/autotest_common.sh@10 -- # set +x 00:26:17.905 ************************************ 00:26:17.905 START TEST 
fio_dif_1_multi_subsystems 00:26:17.905 ************************************ 00:26:17.905 14:32:23 -- common/autotest_common.sh@1114 -- # fio_dif_1_multi_subsystems 00:26:17.905 14:32:23 -- target/dif.sh@92 -- # local files=1 00:26:17.905 14:32:23 -- target/dif.sh@94 -- # create_subsystems 0 1 00:26:18.163 14:32:23 -- target/dif.sh@28 -- # local sub 00:26:18.163 14:32:23 -- target/dif.sh@30 -- # for sub in "$@" 00:26:18.163 14:32:23 -- target/dif.sh@31 -- # create_subsystem 0 00:26:18.163 14:32:23 -- target/dif.sh@18 -- # local sub_id=0 00:26:18.163 14:32:23 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:18.163 14:32:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.163 14:32:23 -- common/autotest_common.sh@10 -- # set +x 00:26:18.163 bdev_null0 00:26:18.163 14:32:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.163 14:32:23 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:18.164 14:32:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.164 14:32:23 -- common/autotest_common.sh@10 -- # set +x 00:26:18.164 14:32:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.164 14:32:23 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:18.164 14:32:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.164 14:32:23 -- common/autotest_common.sh@10 -- # set +x 00:26:18.164 14:32:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.164 14:32:23 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:18.164 14:32:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.164 14:32:23 -- common/autotest_common.sh@10 -- # set +x 00:26:18.164 [2024-12-05 14:32:23.577649] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:18.164 14:32:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.164 14:32:23 -- target/dif.sh@30 -- # for sub in "$@" 00:26:18.164 14:32:23 -- target/dif.sh@31 -- # create_subsystem 1 00:26:18.164 14:32:23 -- target/dif.sh@18 -- # local sub_id=1 00:26:18.164 14:32:23 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:18.164 14:32:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.164 14:32:23 -- common/autotest_common.sh@10 -- # set +x 00:26:18.164 bdev_null1 00:26:18.164 14:32:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.164 14:32:23 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:18.164 14:32:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.164 14:32:23 -- common/autotest_common.sh@10 -- # set +x 00:26:18.164 14:32:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.164 14:32:23 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:18.164 14:32:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.164 14:32:23 -- common/autotest_common.sh@10 -- # set +x 00:26:18.164 14:32:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.164 14:32:23 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:18.164 14:32:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.164 14:32:23 -- 
common/autotest_common.sh@10 -- # set +x 00:26:18.164 14:32:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.164 14:32:23 -- target/dif.sh@95 -- # fio /dev/fd/62 00:26:18.164 14:32:23 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:26:18.164 14:32:23 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:18.164 14:32:23 -- nvmf/common.sh@520 -- # config=() 00:26:18.164 14:32:23 -- nvmf/common.sh@520 -- # local subsystem config 00:26:18.164 14:32:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:18.164 14:32:23 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:18.164 14:32:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:18.164 { 00:26:18.164 "params": { 00:26:18.164 "name": "Nvme$subsystem", 00:26:18.164 "trtype": "$TEST_TRANSPORT", 00:26:18.164 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:18.164 "adrfam": "ipv4", 00:26:18.164 "trsvcid": "$NVMF_PORT", 00:26:18.164 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:18.164 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:18.164 "hdgst": ${hdgst:-false}, 00:26:18.164 "ddgst": ${ddgst:-false} 00:26:18.164 }, 00:26:18.164 "method": "bdev_nvme_attach_controller" 00:26:18.164 } 00:26:18.164 EOF 00:26:18.164 )") 00:26:18.164 14:32:23 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:18.164 14:32:23 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:18.164 14:32:23 -- target/dif.sh@82 -- # gen_fio_conf 00:26:18.164 14:32:23 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:18.164 14:32:23 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:18.164 14:32:23 -- target/dif.sh@54 -- # local file 00:26:18.164 14:32:23 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:18.164 14:32:23 -- target/dif.sh@56 -- # cat 00:26:18.164 14:32:23 -- common/autotest_common.sh@1330 -- # shift 00:26:18.164 14:32:23 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:18.164 14:32:23 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:18.164 14:32:23 -- nvmf/common.sh@542 -- # cat 00:26:18.164 14:32:23 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:18.164 14:32:23 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:18.164 14:32:23 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:18.164 14:32:23 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:18.164 14:32:23 -- target/dif.sh@72 -- # (( file <= files )) 00:26:18.164 14:32:23 -- target/dif.sh@73 -- # cat 00:26:18.164 14:32:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:18.164 14:32:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:18.164 { 00:26:18.164 "params": { 00:26:18.164 "name": "Nvme$subsystem", 00:26:18.164 "trtype": "$TEST_TRANSPORT", 00:26:18.164 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:18.164 "adrfam": "ipv4", 00:26:18.164 "trsvcid": "$NVMF_PORT", 00:26:18.164 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:18.164 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:18.164 "hdgst": ${hdgst:-false}, 00:26:18.164 "ddgst": ${ddgst:-false} 00:26:18.164 }, 00:26:18.164 "method": "bdev_nvme_attach_controller" 00:26:18.164 } 00:26:18.164 EOF 00:26:18.164 )") 00:26:18.164 14:32:23 -- target/dif.sh@72 -- # (( file++ )) 00:26:18.164 14:32:23 -- 
target/dif.sh@72 -- # (( file <= files )) 00:26:18.164 14:32:23 -- nvmf/common.sh@542 -- # cat 00:26:18.164 14:32:23 -- nvmf/common.sh@544 -- # jq . 00:26:18.164 14:32:23 -- nvmf/common.sh@545 -- # IFS=, 00:26:18.164 14:32:23 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:18.164 "params": { 00:26:18.164 "name": "Nvme0", 00:26:18.164 "trtype": "tcp", 00:26:18.164 "traddr": "10.0.0.2", 00:26:18.164 "adrfam": "ipv4", 00:26:18.164 "trsvcid": "4420", 00:26:18.164 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:18.164 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:18.164 "hdgst": false, 00:26:18.164 "ddgst": false 00:26:18.164 }, 00:26:18.164 "method": "bdev_nvme_attach_controller" 00:26:18.164 },{ 00:26:18.164 "params": { 00:26:18.164 "name": "Nvme1", 00:26:18.164 "trtype": "tcp", 00:26:18.164 "traddr": "10.0.0.2", 00:26:18.164 "adrfam": "ipv4", 00:26:18.164 "trsvcid": "4420", 00:26:18.164 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:18.164 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:18.164 "hdgst": false, 00:26:18.164 "ddgst": false 00:26:18.164 }, 00:26:18.164 "method": "bdev_nvme_attach_controller" 00:26:18.164 }' 00:26:18.164 14:32:23 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:18.164 14:32:23 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:18.164 14:32:23 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:18.164 14:32:23 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:18.164 14:32:23 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:18.164 14:32:23 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:18.164 14:32:23 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:18.164 14:32:23 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:18.164 14:32:23 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:18.164 14:32:23 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:18.422 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:18.422 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:18.422 fio-3.35 00:26:18.422 Starting 2 threads 00:26:18.989 [2024-12-05 14:32:24.358189] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
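For the multi-subsystem case the target-side setup above is the single-subsystem sequence repeated per index, with both subsystems listening on the same 10.0.0.2:4420; the generated JSON then carries one bdev_nvme_attach_controller entry per subsystem (Nvme0 and Nvme1), so the fio job gets two filenames. A sketch of the per-index loop (direct rpc.py form of the rpc_cmd calls above):

  for i in 0 1; do
      ./scripts/rpc.py bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 1
      ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
          --serial-number "53313233-$i" --allow-any-host
      ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
      ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
          -t tcp -a 10.0.0.2 -s 4420
  done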
00:26:18.989 [2024-12-05 14:32:24.358257] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:28.956 00:26:28.956 filename0: (groupid=0, jobs=1): err= 0: pid=102401: Thu Dec 5 14:32:34 2024 00:26:28.956 read: IOPS=191, BW=766KiB/s (784kB/s)(7664KiB/10011msec) 00:26:28.956 slat (nsec): min=6092, max=38475, avg=9398.09, stdev=5258.97 00:26:28.956 clat (usec): min=346, max=41880, avg=20869.95, stdev=20238.80 00:26:28.956 lat (usec): min=353, max=41904, avg=20879.35, stdev=20238.76 00:26:28.956 clat percentiles (usec): 00:26:28.956 | 1.00th=[ 355], 5.00th=[ 371], 10.00th=[ 379], 20.00th=[ 396], 00:26:28.956 | 30.00th=[ 412], 40.00th=[ 441], 50.00th=[40633], 60.00th=[40633], 00:26:28.956 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:28.956 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:26:28.956 | 99.99th=[41681] 00:26:28.956 bw ( KiB/s): min= 480, max= 1088, per=50.96%, avg=764.85, stdev=151.88, samples=20 00:26:28.956 iops : min= 120, max= 272, avg=191.20, stdev=37.96, samples=20 00:26:28.956 lat (usec) : 500=46.14%, 750=1.98%, 1000=1.15% 00:26:28.956 lat (msec) : 2=0.21%, 50=50.52% 00:26:28.956 cpu : usr=97.55%, sys=2.09%, ctx=17, majf=0, minf=7 00:26:28.956 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:28.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.956 issued rwts: total=1916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.956 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:28.956 filename1: (groupid=0, jobs=1): err= 0: pid=102402: Thu Dec 5 14:32:34 2024 00:26:28.956 read: IOPS=183, BW=734KiB/s (752kB/s)(7344KiB/10004msec) 00:26:28.956 slat (nsec): min=6081, max=55519, avg=9198.01, stdev=5395.57 00:26:28.956 clat (usec): min=355, max=42410, avg=21764.90, stdev=20198.33 00:26:28.956 lat (usec): min=361, max=42420, avg=21774.10, stdev=20198.23 00:26:28.956 clat percentiles (usec): 00:26:28.956 | 1.00th=[ 371], 5.00th=[ 383], 10.00th=[ 396], 20.00th=[ 412], 00:26:28.956 | 30.00th=[ 429], 40.00th=[ 465], 50.00th=[40633], 60.00th=[40633], 00:26:28.956 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:28.956 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:26:28.956 | 99.99th=[42206] 00:26:28.956 bw ( KiB/s): min= 480, max= 1088, per=48.69%, avg=730.95, stdev=155.04, samples=19 00:26:28.956 iops : min= 120, max= 272, avg=182.74, stdev=38.76, samples=19 00:26:28.956 lat (usec) : 500=43.14%, 750=2.72%, 1000=1.20% 00:26:28.956 lat (msec) : 2=0.22%, 50=52.72% 00:26:28.956 cpu : usr=97.79%, sys=1.81%, ctx=9, majf=0, minf=0 00:26:28.956 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:28.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.956 issued rwts: total=1836,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.956 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:28.956 00:26:28.956 Run status group 0 (all jobs): 00:26:28.956 READ: bw=1499KiB/s (1535kB/s), 734KiB/s-766KiB/s (752kB/s-784kB/s), io=14.7MiB (15.4MB), run=10004-10011msec 00:26:29.215 14:32:34 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:26:29.215 14:32:34 -- target/dif.sh@43 -- # local sub 00:26:29.215 14:32:34 -- target/dif.sh@45 -- # for sub in "$@" 
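Teardown mirrors setup in reverse order: each subsystem is deleted before its backing null bdev, and the EXIT trap then runs nvmftestfini to stop the target and unwind the namespace. The destroy loop that follows is, in direct rpc.py form, roughly:

  for i in 0 1; do
      ./scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
      ./scripts/rpc.py bdev_null_delete "bdev_null$i"
  done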
00:26:29.215 14:32:34 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:29.215 14:32:34 -- target/dif.sh@36 -- # local sub_id=0 00:26:29.215 14:32:34 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:29.215 14:32:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.215 14:32:34 -- common/autotest_common.sh@10 -- # set +x 00:26:29.215 14:32:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.215 14:32:34 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:29.215 14:32:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.215 14:32:34 -- common/autotest_common.sh@10 -- # set +x 00:26:29.215 14:32:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.215 14:32:34 -- target/dif.sh@45 -- # for sub in "$@" 00:26:29.215 14:32:34 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:29.215 14:32:34 -- target/dif.sh@36 -- # local sub_id=1 00:26:29.215 14:32:34 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:29.215 14:32:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.215 14:32:34 -- common/autotest_common.sh@10 -- # set +x 00:26:29.215 14:32:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.215 14:32:34 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:29.215 14:32:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.215 14:32:34 -- common/autotest_common.sh@10 -- # set +x 00:26:29.215 ************************************ 00:26:29.215 END TEST fio_dif_1_multi_subsystems 00:26:29.215 ************************************ 00:26:29.215 14:32:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.215 00:26:29.215 real 0m11.192s 00:26:29.215 user 0m20.360s 00:26:29.215 sys 0m0.682s 00:26:29.215 14:32:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:29.215 14:32:34 -- common/autotest_common.sh@10 -- # set +x 00:26:29.215 14:32:34 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:26:29.215 14:32:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:29.215 14:32:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:29.215 14:32:34 -- common/autotest_common.sh@10 -- # set +x 00:26:29.215 ************************************ 00:26:29.215 START TEST fio_dif_rand_params 00:26:29.215 ************************************ 00:26:29.215 14:32:34 -- common/autotest_common.sh@1114 -- # fio_dif_rand_params 00:26:29.215 14:32:34 -- target/dif.sh@100 -- # local NULL_DIF 00:26:29.215 14:32:34 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:26:29.215 14:32:34 -- target/dif.sh@103 -- # NULL_DIF=3 00:26:29.215 14:32:34 -- target/dif.sh@103 -- # bs=128k 00:26:29.215 14:32:34 -- target/dif.sh@103 -- # numjobs=3 00:26:29.215 14:32:34 -- target/dif.sh@103 -- # iodepth=3 00:26:29.215 14:32:34 -- target/dif.sh@103 -- # runtime=5 00:26:29.215 14:32:34 -- target/dif.sh@105 -- # create_subsystems 0 00:26:29.215 14:32:34 -- target/dif.sh@28 -- # local sub 00:26:29.215 14:32:34 -- target/dif.sh@30 -- # for sub in "$@" 00:26:29.215 14:32:34 -- target/dif.sh@31 -- # create_subsystem 0 00:26:29.215 14:32:34 -- target/dif.sh@18 -- # local sub_id=0 00:26:29.215 14:32:34 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:29.215 14:32:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.215 14:32:34 -- common/autotest_common.sh@10 -- # set +x 00:26:29.215 bdev_null0 00:26:29.215 14:32:34 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.215 14:32:34 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:29.215 14:32:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.215 14:32:34 -- common/autotest_common.sh@10 -- # set +x 00:26:29.215 14:32:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.215 14:32:34 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:29.215 14:32:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.215 14:32:34 -- common/autotest_common.sh@10 -- # set +x 00:26:29.215 14:32:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.215 14:32:34 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:29.215 14:32:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.215 14:32:34 -- common/autotest_common.sh@10 -- # set +x 00:26:29.215 [2024-12-05 14:32:34.836856] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:29.215 14:32:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.215 14:32:34 -- target/dif.sh@106 -- # fio /dev/fd/62 00:26:29.215 14:32:34 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:26:29.215 14:32:34 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:29.215 14:32:34 -- nvmf/common.sh@520 -- # config=() 00:26:29.215 14:32:34 -- nvmf/common.sh@520 -- # local subsystem config 00:26:29.215 14:32:34 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:29.215 14:32:34 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:29.215 { 00:26:29.215 "params": { 00:26:29.215 "name": "Nvme$subsystem", 00:26:29.215 "trtype": "$TEST_TRANSPORT", 00:26:29.215 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:29.215 "adrfam": "ipv4", 00:26:29.215 "trsvcid": "$NVMF_PORT", 00:26:29.215 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:29.215 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:29.216 "hdgst": ${hdgst:-false}, 00:26:29.216 "ddgst": ${ddgst:-false} 00:26:29.216 }, 00:26:29.216 "method": "bdev_nvme_attach_controller" 00:26:29.216 } 00:26:29.216 EOF 00:26:29.216 )") 00:26:29.216 14:32:34 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:29.216 14:32:34 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:29.216 14:32:34 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:29.216 14:32:34 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:29.216 14:32:34 -- target/dif.sh@82 -- # gen_fio_conf 00:26:29.216 14:32:34 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:29.216 14:32:34 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:29.216 14:32:34 -- common/autotest_common.sh@1330 -- # shift 00:26:29.216 14:32:34 -- target/dif.sh@54 -- # local file 00:26:29.216 14:32:34 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:29.216 14:32:34 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:29.216 14:32:34 -- target/dif.sh@56 -- # cat 00:26:29.216 14:32:34 -- nvmf/common.sh@542 -- # cat 00:26:29.216 14:32:34 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:29.216 14:32:34 
-- common/autotest_common.sh@1334 -- # grep libasan 00:26:29.216 14:32:34 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:29.216 14:32:34 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:29.216 14:32:34 -- target/dif.sh@72 -- # (( file <= files )) 00:26:29.216 14:32:34 -- nvmf/common.sh@544 -- # jq . 00:26:29.216 14:32:34 -- nvmf/common.sh@545 -- # IFS=, 00:26:29.216 14:32:34 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:29.216 "params": { 00:26:29.216 "name": "Nvme0", 00:26:29.216 "trtype": "tcp", 00:26:29.216 "traddr": "10.0.0.2", 00:26:29.216 "adrfam": "ipv4", 00:26:29.216 "trsvcid": "4420", 00:26:29.216 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:29.216 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:29.216 "hdgst": false, 00:26:29.216 "ddgst": false 00:26:29.216 }, 00:26:29.216 "method": "bdev_nvme_attach_controller" 00:26:29.216 }' 00:26:29.475 14:32:34 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:29.475 14:32:34 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:29.475 14:32:34 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:29.475 14:32:34 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:29.475 14:32:34 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:29.475 14:32:34 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:29.475 14:32:34 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:29.475 14:32:34 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:29.475 14:32:34 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:29.475 14:32:34 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:29.475 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:29.475 ... 00:26:29.475 fio-3.35 00:26:29.475 Starting 3 threads 00:26:30.039 [2024-12-05 14:32:35.475511] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
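The fio_bdev invocation traced above runs fio through the SPDK bdev plugin (LD_PRELOAD of build/fio/spdk_bdev) and feeds it two pipes: the NVMe-oF attach configuration produced by gen_nvmf_target_json on /dev/fd/62 and the generated job file on /dev/fd/61. A rough stand-alone equivalent using regular files is sketched below; the job parameters (randread, 128 KiB blocks, 3 jobs, iodepth 3, 5 s runtime) are taken from the trace, while thread=1, time_based, and the bdev name Nvme0n1 (the namespace expected behind controller "Nvme0") are assumptions:

  # nvme0.json: same content as the JSON printed by gen_nvmf_target_json above
  # (attach "Nvme0" to nqn.2016-06.io.spdk:cnode0 over TCP at 10.0.0.2:4420).

  cat > dif.job <<'EOF'
  [global]
  # the SPDK fio plugin requires thread mode
  thread=1
  rw=randread
  bs=128k
  iodepth=3
  numjobs=3
  runtime=5
  time_based=1

  [filename0]
  # assumed bdev name for the namespace behind controller "Nvme0"
  filename=Nvme0n1
  EOF

  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=nvme0.json dif.job

The rpc_listen/rpc_initialize *ERROR* lines around this point come from the fio plugin's SPDK instance trying to start its own RPC server on /var/tmp/spdk.sock, which the already-running nvmf target owns; the jobs still start and report results below, so the errors appear to be benign in this run.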
00:26:30.039 [2024-12-05 14:32:35.475574] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:35.310 00:26:35.310 filename0: (groupid=0, jobs=1): err= 0: pid=102562: Thu Dec 5 14:32:40 2024 00:26:35.310 read: IOPS=203, BW=25.4MiB/s (26.7MB/s)(127MiB/5003msec) 00:26:35.310 slat (nsec): min=5224, max=67547, avg=15050.15, stdev=7106.28 00:26:35.310 clat (usec): min=4379, max=57409, avg=14718.00, stdev=14743.29 00:26:35.310 lat (usec): min=4389, max=57450, avg=14733.05, stdev=14743.22 00:26:35.310 clat percentiles (usec): 00:26:35.310 | 1.00th=[ 5473], 5.00th=[ 6259], 10.00th=[ 6587], 20.00th=[ 7439], 00:26:35.310 | 30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9372], 00:26:35.310 | 70.00th=[ 9765], 80.00th=[10290], 90.00th=[49021], 95.00th=[50070], 00:26:35.310 | 99.00th=[51119], 99.50th=[51643], 99.90th=[56886], 99.95th=[57410], 00:26:35.310 | 99.99th=[57410] 00:26:35.310 bw ( KiB/s): min=15360, max=39168, per=23.37%, avg=24775.11, stdev=7770.96, samples=9 00:26:35.310 iops : min= 120, max= 306, avg=193.56, stdev=60.71, samples=9 00:26:35.310 lat (msec) : 10=75.83%, 20=9.14%, 50=9.82%, 100=5.21% 00:26:35.310 cpu : usr=94.96%, sys=3.68%, ctx=7, majf=0, minf=0 00:26:35.310 IO depths : 1=4.4%, 2=95.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:35.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:35.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:35.310 issued rwts: total=1018,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:35.310 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:35.310 filename0: (groupid=0, jobs=1): err= 0: pid=102563: Thu Dec 5 14:32:40 2024 00:26:35.310 read: IOPS=287, BW=36.0MiB/s (37.7MB/s)(180MiB/5002msec) 00:26:35.310 slat (nsec): min=6010, max=65298, avg=11360.97, stdev=5551.98 00:26:35.310 clat (usec): min=3668, max=51577, avg=10411.84, stdev=9348.45 00:26:35.310 lat (usec): min=3674, max=51586, avg=10423.20, stdev=9348.69 00:26:35.310 clat percentiles (usec): 00:26:35.310 | 1.00th=[ 3720], 5.00th=[ 5407], 10.00th=[ 5800], 20.00th=[ 6390], 00:26:35.310 | 30.00th=[ 6652], 40.00th=[ 6980], 50.00th=[ 8291], 60.00th=[ 9634], 00:26:35.310 | 70.00th=[10159], 80.00th=[10683], 90.00th=[11338], 95.00th=[46400], 00:26:35.310 | 99.00th=[50070], 99.50th=[51119], 99.90th=[51643], 99.95th=[51643], 00:26:35.310 | 99.99th=[51643] 00:26:35.310 bw ( KiB/s): min=27136, max=46592, per=35.31%, avg=37432.89, stdev=6332.80, samples=9 00:26:35.310 iops : min= 212, max= 364, avg=292.44, stdev=49.48, samples=9 00:26:35.310 lat (msec) : 4=2.50%, 10=64.42%, 20=27.66%, 50=4.45%, 100=0.97% 00:26:35.310 cpu : usr=93.80%, sys=4.66%, ctx=6, majf=0, minf=0 00:26:35.310 IO depths : 1=2.5%, 2=97.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:35.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:35.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:35.310 issued rwts: total=1439,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:35.310 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:35.310 filename0: (groupid=0, jobs=1): err= 0: pid=102564: Thu Dec 5 14:32:40 2024 00:26:35.310 read: IOPS=337, BW=42.2MiB/s (44.2MB/s)(211MiB/5002msec) 00:26:35.310 slat (nsec): min=5773, max=71939, avg=9933.97, stdev=5515.28 00:26:35.310 clat (usec): min=2025, max=53488, avg=8872.11, stdev=4296.45 00:26:35.310 lat (usec): min=2035, max=53494, avg=8882.04, stdev=4296.98 00:26:35.310 clat percentiles (usec): 
00:26:35.310 | 1.00th=[ 3654], 5.00th=[ 3687], 10.00th=[ 3720], 20.00th=[ 5014], 00:26:35.310 | 30.00th=[ 7570], 40.00th=[ 8029], 50.00th=[ 8455], 60.00th=[ 8979], 00:26:35.310 | 70.00th=[11207], 80.00th=[12125], 90.00th=[13042], 95.00th=[13566], 00:26:35.310 | 99.00th=[14615], 99.50th=[43779], 99.90th=[52691], 99.95th=[53740], 00:26:35.310 | 99.99th=[53740] 00:26:35.310 bw ( KiB/s): min=35328, max=57600, per=41.25%, avg=43730.67, stdev=8119.02, samples=9 00:26:35.310 iops : min= 276, max= 450, avg=341.56, stdev=63.32, samples=9 00:26:35.310 lat (msec) : 4=17.84%, 10=47.24%, 20=34.38%, 50=0.36%, 100=0.18% 00:26:35.310 cpu : usr=92.90%, sys=5.26%, ctx=69, majf=0, minf=9 00:26:35.310 IO depths : 1=26.0%, 2=74.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:35.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:35.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:35.310 issued rwts: total=1687,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:35.310 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:35.310 00:26:35.310 Run status group 0 (all jobs): 00:26:35.310 READ: bw=104MiB/s (109MB/s), 25.4MiB/s-42.2MiB/s (26.7MB/s-44.2MB/s), io=518MiB (543MB), run=5002-5003msec 00:26:35.310 14:32:40 -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:35.310 14:32:40 -- target/dif.sh@43 -- # local sub 00:26:35.310 14:32:40 -- target/dif.sh@45 -- # for sub in "$@" 00:26:35.310 14:32:40 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:35.310 14:32:40 -- target/dif.sh@36 -- # local sub_id=0 00:26:35.310 14:32:40 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:35.310 14:32:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.310 14:32:40 -- common/autotest_common.sh@10 -- # set +x 00:26:35.310 14:32:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.310 14:32:40 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:35.310 14:32:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.310 14:32:40 -- common/autotest_common.sh@10 -- # set +x 00:26:35.310 14:32:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.310 14:32:40 -- target/dif.sh@109 -- # NULL_DIF=2 00:26:35.310 14:32:40 -- target/dif.sh@109 -- # bs=4k 00:26:35.310 14:32:40 -- target/dif.sh@109 -- # numjobs=8 00:26:35.310 14:32:40 -- target/dif.sh@109 -- # iodepth=16 00:26:35.310 14:32:40 -- target/dif.sh@109 -- # runtime= 00:26:35.310 14:32:40 -- target/dif.sh@109 -- # files=2 00:26:35.310 14:32:40 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:35.310 14:32:40 -- target/dif.sh@28 -- # local sub 00:26:35.310 14:32:40 -- target/dif.sh@30 -- # for sub in "$@" 00:26:35.310 14:32:40 -- target/dif.sh@31 -- # create_subsystem 0 00:26:35.310 14:32:40 -- target/dif.sh@18 -- # local sub_id=0 00:26:35.310 14:32:40 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:26:35.310 14:32:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.311 14:32:40 -- common/autotest_common.sh@10 -- # set +x 00:26:35.311 bdev_null0 00:26:35.311 14:32:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.311 14:32:40 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:35.311 14:32:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.311 14:32:40 -- common/autotest_common.sh@10 -- # set +x 00:26:35.311 14:32:40 -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:26:35.311 14:32:40 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:35.311 14:32:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.311 14:32:40 -- common/autotest_common.sh@10 -- # set +x 00:26:35.311 14:32:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.311 14:32:40 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:35.311 14:32:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.311 14:32:40 -- common/autotest_common.sh@10 -- # set +x 00:26:35.311 [2024-12-05 14:32:40.898344] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:35.311 14:32:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.311 14:32:40 -- target/dif.sh@30 -- # for sub in "$@" 00:26:35.311 14:32:40 -- target/dif.sh@31 -- # create_subsystem 1 00:26:35.311 14:32:40 -- target/dif.sh@18 -- # local sub_id=1 00:26:35.311 14:32:40 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:35.311 14:32:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.311 14:32:40 -- common/autotest_common.sh@10 -- # set +x 00:26:35.311 bdev_null1 00:26:35.311 14:32:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.311 14:32:40 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:35.311 14:32:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.311 14:32:40 -- common/autotest_common.sh@10 -- # set +x 00:26:35.311 14:32:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.311 14:32:40 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:35.311 14:32:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.311 14:32:40 -- common/autotest_common.sh@10 -- # set +x 00:26:35.311 14:32:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.311 14:32:40 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:35.311 14:32:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.311 14:32:40 -- common/autotest_common.sh@10 -- # set +x 00:26:35.311 14:32:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.311 14:32:40 -- target/dif.sh@30 -- # for sub in "$@" 00:26:35.311 14:32:40 -- target/dif.sh@31 -- # create_subsystem 2 00:26:35.311 14:32:40 -- target/dif.sh@18 -- # local sub_id=2 00:26:35.311 14:32:40 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:26:35.311 14:32:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.311 14:32:40 -- common/autotest_common.sh@10 -- # set +x 00:26:35.311 bdev_null2 00:26:35.311 14:32:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.311 14:32:40 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:26:35.311 14:32:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.311 14:32:40 -- common/autotest_common.sh@10 -- # set +x 00:26:35.571 14:32:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.571 14:32:40 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:26:35.571 14:32:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.571 14:32:40 -- 
common/autotest_common.sh@10 -- # set +x 00:26:35.571 14:32:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.571 14:32:40 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:35.571 14:32:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.571 14:32:40 -- common/autotest_common.sh@10 -- # set +x 00:26:35.571 14:32:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.571 14:32:40 -- target/dif.sh@112 -- # fio /dev/fd/62 00:26:35.571 14:32:40 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:26:35.571 14:32:40 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:26:35.571 14:32:40 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:35.571 14:32:40 -- nvmf/common.sh@520 -- # config=() 00:26:35.571 14:32:40 -- target/dif.sh@82 -- # gen_fio_conf 00:26:35.571 14:32:40 -- nvmf/common.sh@520 -- # local subsystem config 00:26:35.571 14:32:40 -- target/dif.sh@54 -- # local file 00:26:35.571 14:32:40 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:35.571 14:32:40 -- target/dif.sh@56 -- # cat 00:26:35.571 14:32:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:35.571 14:32:40 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:35.571 14:32:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:35.571 { 00:26:35.571 "params": { 00:26:35.571 "name": "Nvme$subsystem", 00:26:35.571 "trtype": "$TEST_TRANSPORT", 00:26:35.571 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:35.571 "adrfam": "ipv4", 00:26:35.571 "trsvcid": "$NVMF_PORT", 00:26:35.571 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:35.571 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:35.571 "hdgst": ${hdgst:-false}, 00:26:35.571 "ddgst": ${ddgst:-false} 00:26:35.571 }, 00:26:35.571 "method": "bdev_nvme_attach_controller" 00:26:35.571 } 00:26:35.571 EOF 00:26:35.571 )") 00:26:35.571 14:32:40 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:35.571 14:32:40 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:35.571 14:32:40 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:35.571 14:32:40 -- common/autotest_common.sh@1330 -- # shift 00:26:35.571 14:32:40 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:35.571 14:32:40 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:35.571 14:32:40 -- nvmf/common.sh@542 -- # cat 00:26:35.571 14:32:40 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:35.571 14:32:40 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:35.571 14:32:40 -- target/dif.sh@72 -- # (( file <= files )) 00:26:35.571 14:32:40 -- target/dif.sh@73 -- # cat 00:26:35.571 14:32:40 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:35.571 14:32:40 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:35.571 14:32:40 -- target/dif.sh@72 -- # (( file++ )) 00:26:35.571 14:32:40 -- target/dif.sh@72 -- # (( file <= files )) 00:26:35.571 14:32:40 -- target/dif.sh@73 -- # cat 00:26:35.571 14:32:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:35.571 14:32:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:35.571 { 00:26:35.571 "params": { 00:26:35.571 "name": "Nvme$subsystem", 00:26:35.571 "trtype": 
"$TEST_TRANSPORT", 00:26:35.571 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:35.571 "adrfam": "ipv4", 00:26:35.571 "trsvcid": "$NVMF_PORT", 00:26:35.571 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:35.571 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:35.571 "hdgst": ${hdgst:-false}, 00:26:35.571 "ddgst": ${ddgst:-false} 00:26:35.571 }, 00:26:35.572 "method": "bdev_nvme_attach_controller" 00:26:35.572 } 00:26:35.572 EOF 00:26:35.572 )") 00:26:35.572 14:32:40 -- nvmf/common.sh@542 -- # cat 00:26:35.572 14:32:40 -- target/dif.sh@72 -- # (( file++ )) 00:26:35.572 14:32:40 -- target/dif.sh@72 -- # (( file <= files )) 00:26:35.572 14:32:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:35.572 14:32:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:35.572 { 00:26:35.572 "params": { 00:26:35.572 "name": "Nvme$subsystem", 00:26:35.572 "trtype": "$TEST_TRANSPORT", 00:26:35.572 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:35.572 "adrfam": "ipv4", 00:26:35.572 "trsvcid": "$NVMF_PORT", 00:26:35.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:35.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:35.572 "hdgst": ${hdgst:-false}, 00:26:35.572 "ddgst": ${ddgst:-false} 00:26:35.572 }, 00:26:35.572 "method": "bdev_nvme_attach_controller" 00:26:35.572 } 00:26:35.572 EOF 00:26:35.572 )") 00:26:35.572 14:32:40 -- nvmf/common.sh@542 -- # cat 00:26:35.572 14:32:41 -- nvmf/common.sh@544 -- # jq . 00:26:35.572 14:32:41 -- nvmf/common.sh@545 -- # IFS=, 00:26:35.572 14:32:41 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:35.572 "params": { 00:26:35.572 "name": "Nvme0", 00:26:35.572 "trtype": "tcp", 00:26:35.572 "traddr": "10.0.0.2", 00:26:35.572 "adrfam": "ipv4", 00:26:35.572 "trsvcid": "4420", 00:26:35.572 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:35.572 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:35.572 "hdgst": false, 00:26:35.572 "ddgst": false 00:26:35.572 }, 00:26:35.572 "method": "bdev_nvme_attach_controller" 00:26:35.572 },{ 00:26:35.572 "params": { 00:26:35.572 "name": "Nvme1", 00:26:35.572 "trtype": "tcp", 00:26:35.572 "traddr": "10.0.0.2", 00:26:35.572 "adrfam": "ipv4", 00:26:35.572 "trsvcid": "4420", 00:26:35.572 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:35.572 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:35.572 "hdgst": false, 00:26:35.572 "ddgst": false 00:26:35.572 }, 00:26:35.572 "method": "bdev_nvme_attach_controller" 00:26:35.572 },{ 00:26:35.572 "params": { 00:26:35.572 "name": "Nvme2", 00:26:35.572 "trtype": "tcp", 00:26:35.572 "traddr": "10.0.0.2", 00:26:35.572 "adrfam": "ipv4", 00:26:35.572 "trsvcid": "4420", 00:26:35.572 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:35.572 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:35.572 "hdgst": false, 00:26:35.572 "ddgst": false 00:26:35.572 }, 00:26:35.572 "method": "bdev_nvme_attach_controller" 00:26:35.572 }' 00:26:35.572 14:32:41 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:35.572 14:32:41 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:35.572 14:32:41 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:35.572 14:32:41 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:35.572 14:32:41 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:35.572 14:32:41 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:35.572 14:32:41 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:35.572 14:32:41 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:35.572 14:32:41 -- 
common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:35.572 14:32:41 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:35.572 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:35.572 ... 00:26:35.572 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:35.572 ... 00:26:35.572 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:35.572 ... 00:26:35.572 fio-3.35 00:26:35.572 Starting 24 threads 00:26:36.510 [2024-12-05 14:32:41.814318] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:26:36.510 [2024-12-05 14:32:41.814372] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:46.530 00:26:46.530 filename0: (groupid=0, jobs=1): err= 0: pid=102663: Thu Dec 5 14:32:52 2024 00:26:46.530 read: IOPS=267, BW=1070KiB/s (1095kB/s)(10.5MiB/10088msec) 00:26:46.530 slat (usec): min=4, max=8041, avg=26.83, stdev=314.83 00:26:46.530 clat (msec): min=8, max=140, avg=59.58, stdev=21.33 00:26:46.530 lat (msec): min=8, max=140, avg=59.60, stdev=21.33 00:26:46.530 clat percentiles (msec): 00:26:46.530 | 1.00th=[ 14], 5.00th=[ 32], 10.00th=[ 36], 20.00th=[ 43], 00:26:46.530 | 30.00th=[ 48], 40.00th=[ 50], 50.00th=[ 59], 60.00th=[ 62], 00:26:46.530 | 70.00th=[ 70], 80.00th=[ 75], 90.00th=[ 86], 95.00th=[ 102], 00:26:46.530 | 99.00th=[ 118], 99.50th=[ 128], 99.90th=[ 142], 99.95th=[ 142], 00:26:46.530 | 99.99th=[ 142] 00:26:46.530 bw ( KiB/s): min= 640, max= 1608, per=4.35%, avg=1072.80, stdev=206.08, samples=20 00:26:46.530 iops : min= 160, max= 402, avg=268.20, stdev=51.52, samples=20 00:26:46.530 lat (msec) : 10=0.59%, 20=0.59%, 50=39.92%, 100=53.60%, 250=5.30% 00:26:46.530 cpu : usr=33.01%, sys=0.42%, ctx=928, majf=0, minf=9 00:26:46.531 IO depths : 1=0.7%, 2=1.6%, 4=8.1%, 8=76.9%, 16=12.8%, 32=0.0%, >=64=0.0% 00:26:46.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.531 complete : 0=0.0%, 4=89.6%, 8=5.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.531 issued rwts: total=2698,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.531 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:46.531 filename0: (groupid=0, jobs=1): err= 0: pid=102664: Thu Dec 5 14:32:52 2024 00:26:46.531 read: IOPS=224, BW=899KiB/s (920kB/s)(9008KiB/10024msec) 00:26:46.531 slat (usec): min=3, max=7465, avg=19.97, stdev=231.66 00:26:46.531 clat (msec): min=20, max=156, avg=71.08, stdev=22.65 00:26:46.531 lat (msec): min=20, max=156, avg=71.10, stdev=22.65 00:26:46.531 clat percentiles (msec): 00:26:46.531 | 1.00th=[ 24], 5.00th=[ 38], 10.00th=[ 47], 20.00th=[ 56], 00:26:46.531 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 66], 60.00th=[ 73], 00:26:46.531 | 70.00th=[ 84], 80.00th=[ 87], 90.00th=[ 101], 95.00th=[ 115], 00:26:46.531 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 157], 99.95th=[ 157], 00:26:46.531 | 99.99th=[ 157] 00:26:46.531 bw ( KiB/s): min= 640, max= 1280, per=3.60%, avg=887.58, stdev=153.32, samples=19 00:26:46.531 iops : min= 160, max= 320, avg=221.89, stdev=38.33, samples=19 00:26:46.531 lat (msec) : 50=15.10%, 100=75.18%, 250=9.72% 00:26:46.531 cpu : usr=35.55%, sys=0.59%, ctx=1028, majf=0, minf=9 00:26:46.531 IO depths : 1=1.6%, 2=3.7%, 4=13.8%, 
8=69.5%, 16=11.4%, 32=0.0%, >=64=0.0% 00:26:46.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.531 complete : 0=0.0%, 4=90.6%, 8=4.3%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.531 issued rwts: total=2252,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.531 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:46.531 filename0: (groupid=0, jobs=1): err= 0: pid=102665: Thu Dec 5 14:32:52 2024 00:26:46.531 read: IOPS=272, BW=1088KiB/s (1114kB/s)(10.7MiB/10030msec) 00:26:46.531 slat (usec): min=4, max=8060, avg=16.42, stdev=172.31 00:26:46.531 clat (msec): min=14, max=138, avg=58.66, stdev=20.61 00:26:46.531 lat (msec): min=14, max=138, avg=58.67, stdev=20.61 00:26:46.531 clat percentiles (msec): 00:26:46.531 | 1.00th=[ 19], 5.00th=[ 31], 10.00th=[ 36], 20.00th=[ 42], 00:26:46.531 | 30.00th=[ 47], 40.00th=[ 52], 50.00th=[ 58], 60.00th=[ 62], 00:26:46.531 | 70.00th=[ 66], 80.00th=[ 75], 90.00th=[ 86], 95.00th=[ 96], 00:26:46.531 | 99.00th=[ 127], 99.50th=[ 128], 99.90th=[ 138], 99.95th=[ 138], 00:26:46.531 | 99.99th=[ 138] 00:26:46.531 bw ( KiB/s): min= 688, max= 1584, per=4.40%, avg=1085.05, stdev=233.77, samples=20 00:26:46.531 iops : min= 172, max= 396, avg=271.20, stdev=58.49, samples=20 00:26:46.531 lat (msec) : 20=1.65%, 50=35.40%, 100=59.44%, 250=3.52% 00:26:46.531 cpu : usr=45.43%, sys=0.50%, ctx=1366, majf=0, minf=9 00:26:46.531 IO depths : 1=1.0%, 2=2.3%, 4=10.2%, 8=74.0%, 16=12.5%, 32=0.0%, >=64=0.0% 00:26:46.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.531 complete : 0=0.0%, 4=90.1%, 8=5.3%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.531 issued rwts: total=2729,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.531 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:46.531 filename0: (groupid=0, jobs=1): err= 0: pid=102666: Thu Dec 5 14:32:52 2024 00:26:46.531 read: IOPS=245, BW=980KiB/s (1004kB/s)(9812KiB/10010msec) 00:26:46.531 slat (usec): min=3, max=8039, avg=22.05, stdev=270.07 00:26:46.531 clat (msec): min=20, max=137, avg=65.17, stdev=21.18 00:26:46.531 lat (msec): min=20, max=137, avg=65.19, stdev=21.18 00:26:46.531 clat percentiles (msec): 00:26:46.531 | 1.00th=[ 23], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 48], 00:26:46.531 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 62], 60.00th=[ 70], 00:26:46.531 | 70.00th=[ 73], 80.00th=[ 84], 90.00th=[ 94], 95.00th=[ 101], 00:26:46.531 | 99.00th=[ 121], 99.50th=[ 129], 99.90th=[ 138], 99.95th=[ 138], 00:26:46.531 | 99.99th=[ 138] 00:26:46.531 bw ( KiB/s): min= 680, max= 1440, per=3.97%, avg=978.58, stdev=186.91, samples=19 00:26:46.531 iops : min= 170, max= 360, avg=244.63, stdev=46.74, samples=19 00:26:46.531 lat (msec) : 50=27.92%, 100=67.22%, 250=4.85% 00:26:46.531 cpu : usr=32.70%, sys=0.42%, ctx=864, majf=0, minf=9 00:26:46.531 IO depths : 1=0.3%, 2=0.8%, 4=6.7%, 8=77.9%, 16=14.3%, 32=0.0%, >=64=0.0% 00:26:46.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.531 complete : 0=0.0%, 4=89.4%, 8=7.0%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.531 issued rwts: total=2453,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.531 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:46.531 filename0: (groupid=0, jobs=1): err= 0: pid=102667: Thu Dec 5 14:32:52 2024 00:26:46.531 read: IOPS=238, BW=955KiB/s (978kB/s)(9548KiB/10001msec) 00:26:46.531 slat (usec): min=4, max=8033, avg=21.14, stdev=262.59 00:26:46.531 clat (msec): min=2, max=155, avg=66.94, stdev=24.18 00:26:46.531 lat (msec): min=2, max=155, 
avg=66.96, stdev=24.17 00:26:46.531 clat percentiles (msec): 00:26:46.531 | 1.00th=[ 6], 5.00th=[ 28], 10.00th=[ 37], 20.00th=[ 48], 00:26:46.531 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 71], 00:26:46.531 | 70.00th=[ 79], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 110], 00:26:46.531 | 99.00th=[ 132], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 157], 00:26:46.531 | 99.99th=[ 157] 00:26:46.531 bw ( KiB/s): min= 640, max= 1408, per=3.81%, avg=940.21, stdev=176.15, samples=19 00:26:46.531 iops : min= 160, max= 352, avg=235.05, stdev=44.04, samples=19 00:26:46.531 lat (msec) : 4=0.84%, 10=0.50%, 50=20.49%, 100=69.92%, 250=8.25% 00:26:46.531 cpu : usr=35.26%, sys=0.45%, ctx=985, majf=0, minf=9 00:26:46.531 IO depths : 1=1.6%, 2=3.6%, 4=11.9%, 8=71.0%, 16=11.9%, 32=0.0%, >=64=0.0% 00:26:46.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.531 complete : 0=0.0%, 4=90.5%, 8=4.8%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.531 issued rwts: total=2387,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.531 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:46.531 filename0: (groupid=0, jobs=1): err= 0: pid=102668: Thu Dec 5 14:32:52 2024 00:26:46.531 read: IOPS=237, BW=949KiB/s (972kB/s)(9512KiB/10019msec) 00:26:46.531 slat (usec): min=4, max=8029, avg=29.28, stdev=366.81 00:26:46.531 clat (msec): min=15, max=131, avg=67.22, stdev=20.55 00:26:46.531 lat (msec): min=15, max=131, avg=67.25, stdev=20.55 00:26:46.531 clat percentiles (msec): 00:26:46.531 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 41], 20.00th=[ 54], 00:26:46.531 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 71], 00:26:46.531 | 70.00th=[ 77], 80.00th=[ 85], 90.00th=[ 94], 95.00th=[ 103], 00:26:46.531 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 132], 99.95th=[ 132], 00:26:46.531 | 99.99th=[ 132] 00:26:46.531 bw ( KiB/s): min= 768, max= 1352, per=3.81%, avg=940.63, stdev=145.51, samples=19 00:26:46.531 iops : min= 192, max= 338, avg=235.16, stdev=36.38, samples=19 00:26:46.531 lat (msec) : 20=0.46%, 50=18.33%, 100=75.44%, 250=5.76% 00:26:46.531 cpu : usr=37.83%, sys=0.53%, ctx=1021, majf=0, minf=9 00:26:46.531 IO depths : 1=1.9%, 2=4.2%, 4=13.1%, 8=70.0%, 16=10.9%, 32=0.0%, >=64=0.0% 00:26:46.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.531 complete : 0=0.0%, 4=90.6%, 8=4.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.531 issued rwts: total=2378,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.531 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:46.531 filename0: (groupid=0, jobs=1): err= 0: pid=102669: Thu Dec 5 14:32:52 2024 00:26:46.531 read: IOPS=235, BW=943KiB/s (966kB/s)(9432KiB/10001msec) 00:26:46.531 slat (usec): min=4, max=4033, avg=17.09, stdev=133.23 00:26:46.531 clat (msec): min=2, max=142, avg=67.70, stdev=24.42 00:26:46.531 lat (msec): min=2, max=142, avg=67.72, stdev=24.42 00:26:46.531 clat percentiles (msec): 00:26:46.531 | 1.00th=[ 6], 5.00th=[ 31], 10.00th=[ 40], 20.00th=[ 48], 00:26:46.531 | 30.00th=[ 58], 40.00th=[ 62], 50.00th=[ 65], 60.00th=[ 72], 00:26:46.531 | 70.00th=[ 80], 80.00th=[ 87], 90.00th=[ 99], 95.00th=[ 110], 00:26:46.531 | 99.00th=[ 138], 99.50th=[ 142], 99.90th=[ 142], 99.95th=[ 142], 00:26:46.531 | 99.99th=[ 142] 00:26:46.531 bw ( KiB/s): min= 624, max= 1408, per=3.75%, avg=924.63, stdev=207.90, samples=19 00:26:46.531 iops : min= 156, max= 352, avg=231.16, stdev=51.98, samples=19 00:26:46.531 lat (msec) : 4=0.68%, 10=0.68%, 20=0.38%, 50=19.68%, 100=69.59% 00:26:46.531 lat (msec) : 
250=8.99% 00:26:46.531 cpu : usr=44.41%, sys=0.58%, ctx=1209, majf=0, minf=9 00:26:46.531 IO depths : 1=3.2%, 2=6.8%, 4=17.3%, 8=63.2%, 16=9.6%, 32=0.0%, >=64=0.0% 00:26:46.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.531 complete : 0=0.0%, 4=91.8%, 8=2.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.531 issued rwts: total=2358,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.531 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:46.531 filename0: (groupid=0, jobs=1): err= 0: pid=102670: Thu Dec 5 14:32:52 2024 00:26:46.531 read: IOPS=228, BW=916KiB/s (938kB/s)(9160KiB/10002msec) 00:26:46.531 slat (usec): min=4, max=8031, avg=18.05, stdev=180.58 00:26:46.531 clat (msec): min=4, max=179, avg=69.74, stdev=22.62 00:26:46.531 lat (msec): min=4, max=179, avg=69.76, stdev=22.62 00:26:46.531 clat percentiles (msec): 00:26:46.531 | 1.00th=[ 21], 5.00th=[ 36], 10.00th=[ 45], 20.00th=[ 57], 00:26:46.531 | 30.00th=[ 61], 40.00th=[ 61], 50.00th=[ 69], 60.00th=[ 72], 00:26:46.531 | 70.00th=[ 78], 80.00th=[ 86], 90.00th=[ 96], 95.00th=[ 109], 00:26:46.531 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 180], 99.95th=[ 180], 00:26:46.531 | 99.99th=[ 180] 00:26:46.531 bw ( KiB/s): min= 640, max= 1280, per=3.66%, avg=903.58, stdev=153.78, samples=19 00:26:46.531 iops : min= 160, max= 320, avg=225.89, stdev=38.44, samples=19 00:26:46.531 lat (msec) : 10=0.70%, 50=16.77%, 100=74.59%, 250=7.95% 00:26:46.531 cpu : usr=32.71%, sys=0.46%, ctx=909, majf=0, minf=9 00:26:46.531 IO depths : 1=2.3%, 2=4.9%, 4=13.8%, 8=67.9%, 16=11.1%, 32=0.0%, >=64=0.0% 00:26:46.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.531 complete : 0=0.0%, 4=91.2%, 8=4.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.531 issued rwts: total=2290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.531 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:46.531 filename1: (groupid=0, jobs=1): err= 0: pid=102671: Thu Dec 5 14:32:52 2024 00:26:46.531 read: IOPS=238, BW=953KiB/s (976kB/s)(9556KiB/10029msec) 00:26:46.531 slat (usec): min=3, max=8036, avg=18.59, stdev=188.05 00:26:46.531 clat (msec): min=21, max=158, avg=67.01, stdev=18.86 00:26:46.531 lat (msec): min=21, max=158, avg=67.03, stdev=18.86 00:26:46.531 clat percentiles (msec): 00:26:46.531 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 55], 00:26:46.531 | 30.00th=[ 60], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 71], 00:26:46.531 | 70.00th=[ 73], 80.00th=[ 84], 90.00th=[ 93], 95.00th=[ 99], 00:26:46.531 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 159], 99.95th=[ 159], 00:26:46.531 | 99.99th=[ 159] 00:26:46.531 bw ( KiB/s): min= 768, max= 1408, per=3.85%, avg=949.05, stdev=145.35, samples=20 00:26:46.531 iops : min= 192, max= 352, avg=237.25, stdev=36.33, samples=20 00:26:46.531 lat (msec) : 50=15.70%, 100=79.78%, 250=4.52% 00:26:46.531 cpu : usr=35.57%, sys=0.72%, ctx=1243, majf=0, minf=9 00:26:46.531 IO depths : 1=2.3%, 2=5.3%, 4=14.6%, 8=67.2%, 16=10.7%, 32=0.0%, >=64=0.0% 00:26:46.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.531 complete : 0=0.0%, 4=91.4%, 8=3.4%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.531 issued rwts: total=2389,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.531 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:46.531 filename1: (groupid=0, jobs=1): err= 0: pid=102672: Thu Dec 5 14:32:52 2024 00:26:46.531 read: IOPS=231, BW=925KiB/s (948kB/s)(9268KiB/10016msec) 00:26:46.531 slat (usec): min=3, max=8063, 
avg=22.35, stdev=288.83 00:26:46.531 clat (msec): min=21, max=144, avg=68.88, stdev=22.22 00:26:46.531 lat (msec): min=21, max=144, avg=68.91, stdev=22.23 00:26:46.531 clat percentiles (msec): 00:26:46.531 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 42], 20.00th=[ 52], 00:26:46.531 | 30.00th=[ 60], 40.00th=[ 62], 50.00th=[ 68], 60.00th=[ 72], 00:26:46.531 | 70.00th=[ 77], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 108], 00:26:46.531 | 99.00th=[ 131], 99.50th=[ 142], 99.90th=[ 144], 99.95th=[ 144], 00:26:46.531 | 99.99th=[ 144] 00:26:46.531 bw ( KiB/s): min= 624, max= 1456, per=3.76%, avg=928.47, stdev=219.15, samples=19 00:26:46.531 iops : min= 156, max= 364, avg=232.11, stdev=54.80, samples=19 00:26:46.531 lat (msec) : 50=18.77%, 100=72.46%, 250=8.76% 00:26:46.531 cpu : usr=35.36%, sys=0.49%, ctx=928, majf=0, minf=9 00:26:46.531 IO depths : 1=1.9%, 2=4.5%, 4=14.0%, 8=68.3%, 16=11.2%, 32=0.0%, >=64=0.0% 00:26:46.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.531 complete : 0=0.0%, 4=90.9%, 8=4.1%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.531 issued rwts: total=2317,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.531 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:46.531 filename1: (groupid=0, jobs=1): err= 0: pid=102673: Thu Dec 5 14:32:52 2024 00:26:46.531 read: IOPS=257, BW=1030KiB/s (1055kB/s)(10.1MiB/10051msec) 00:26:46.531 slat (usec): min=3, max=8030, avg=19.14, stdev=222.61 00:26:46.531 clat (msec): min=18, max=146, avg=61.97, stdev=20.59 00:26:46.531 lat (msec): min=18, max=146, avg=61.98, stdev=20.60 00:26:46.531 clat percentiles (msec): 00:26:46.531 | 1.00th=[ 21], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 45], 00:26:46.531 | 30.00th=[ 50], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 66], 00:26:46.531 | 70.00th=[ 72], 80.00th=[ 80], 90.00th=[ 86], 95.00th=[ 96], 00:26:46.531 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 132], 99.95th=[ 132], 00:26:46.531 | 99.99th=[ 146] 00:26:46.531 bw ( KiB/s): min= 696, max= 1808, per=4.17%, avg=1028.80, stdev=236.00, samples=20 00:26:46.531 iops : min= 174, max= 452, avg=257.20, stdev=59.00, samples=20 00:26:46.531 lat (msec) : 20=0.62%, 50=31.49%, 100=63.76%, 250=4.13% 00:26:46.531 cpu : usr=32.87%, sys=0.38%, ctx=882, majf=0, minf=9 00:26:46.531 IO depths : 1=0.9%, 2=2.1%, 4=9.9%, 8=74.6%, 16=12.5%, 32=0.0%, >=64=0.0% 00:26:46.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.531 complete : 0=0.0%, 4=89.9%, 8=5.4%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.532 issued rwts: total=2588,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.532 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:46.532 filename1: (groupid=0, jobs=1): err= 0: pid=102674: Thu Dec 5 14:32:52 2024 00:26:46.532 read: IOPS=285, BW=1142KiB/s (1169kB/s)(11.2MiB/10027msec) 00:26:46.532 slat (usec): min=4, max=4042, avg=12.46, stdev=75.66 00:26:46.532 clat (msec): min=18, max=129, avg=55.97, stdev=18.95 00:26:46.532 lat (msec): min=18, max=129, avg=55.98, stdev=18.95 00:26:46.532 clat percentiles (msec): 00:26:46.532 | 1.00th=[ 23], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 41], 00:26:46.532 | 30.00th=[ 45], 40.00th=[ 48], 50.00th=[ 53], 60.00th=[ 59], 00:26:46.532 | 70.00th=[ 64], 80.00th=[ 71], 90.00th=[ 82], 95.00th=[ 91], 00:26:46.532 | 99.00th=[ 115], 99.50th=[ 122], 99.90th=[ 130], 99.95th=[ 130], 00:26:46.532 | 99.99th=[ 130] 00:26:46.532 bw ( KiB/s): min= 768, max= 1632, per=4.61%, avg=1138.35, stdev=217.60, samples=20 00:26:46.532 iops : min= 192, max= 408, avg=284.55, stdev=54.44, 
samples=20 00:26:46.532 lat (msec) : 20=0.52%, 50=44.34%, 100=53.00%, 250=2.13% 00:26:46.532 cpu : usr=41.84%, sys=0.62%, ctx=1081, majf=0, minf=9 00:26:46.532 IO depths : 1=0.2%, 2=0.8%, 4=7.0%, 8=78.7%, 16=13.3%, 32=0.0%, >=64=0.0% 00:26:46.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.532 complete : 0=0.0%, 4=89.2%, 8=6.3%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.532 issued rwts: total=2862,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.532 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:46.532 filename1: (groupid=0, jobs=1): err= 0: pid=102675: Thu Dec 5 14:32:52 2024 00:26:46.532 read: IOPS=293, BW=1174KiB/s (1202kB/s)(11.5MiB/10039msec) 00:26:46.532 slat (usec): min=3, max=8028, avg=20.25, stdev=234.87 00:26:46.532 clat (msec): min=15, max=129, avg=54.32, stdev=19.57 00:26:46.532 lat (msec): min=15, max=129, avg=54.34, stdev=19.56 00:26:46.532 clat percentiles (msec): 00:26:46.532 | 1.00th=[ 19], 5.00th=[ 29], 10.00th=[ 34], 20.00th=[ 39], 00:26:46.532 | 30.00th=[ 42], 40.00th=[ 46], 50.00th=[ 53], 60.00th=[ 57], 00:26:46.532 | 70.00th=[ 62], 80.00th=[ 71], 90.00th=[ 79], 95.00th=[ 93], 00:26:46.532 | 99.00th=[ 116], 99.50th=[ 122], 99.90th=[ 130], 99.95th=[ 130], 00:26:46.532 | 99.99th=[ 130] 00:26:46.532 bw ( KiB/s): min= 769, max= 1587, per=4.75%, avg=1172.60, stdev=245.73, samples=20 00:26:46.532 iops : min= 192, max= 396, avg=293.00, stdev=61.34, samples=20 00:26:46.532 lat (msec) : 20=1.09%, 50=46.25%, 100=50.19%, 250=2.48% 00:26:46.532 cpu : usr=48.34%, sys=0.39%, ctx=1348, majf=0, minf=9 00:26:46.532 IO depths : 1=0.4%, 2=1.0%, 4=7.1%, 8=78.4%, 16=13.2%, 32=0.0%, >=64=0.0% 00:26:46.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.532 complete : 0=0.0%, 4=89.4%, 8=6.2%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.532 issued rwts: total=2947,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.532 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:46.532 filename1: (groupid=0, jobs=1): err= 0: pid=102676: Thu Dec 5 14:32:52 2024 00:26:46.532 read: IOPS=290, BW=1163KiB/s (1191kB/s)(11.4MiB/10056msec) 00:26:46.532 slat (usec): min=4, max=8021, avg=18.55, stdev=216.99 00:26:46.532 clat (msec): min=5, max=120, avg=54.85, stdev=21.35 00:26:46.532 lat (msec): min=5, max=120, avg=54.87, stdev=21.36 00:26:46.532 clat percentiles (msec): 00:26:46.532 | 1.00th=[ 6], 5.00th=[ 23], 10.00th=[ 32], 20.00th=[ 38], 00:26:46.532 | 30.00th=[ 43], 40.00th=[ 47], 50.00th=[ 53], 60.00th=[ 59], 00:26:46.532 | 70.00th=[ 63], 80.00th=[ 71], 90.00th=[ 84], 95.00th=[ 96], 00:26:46.532 | 99.00th=[ 115], 99.50th=[ 118], 99.90th=[ 122], 99.95th=[ 122], 00:26:46.532 | 99.99th=[ 122] 00:26:46.532 bw ( KiB/s): min= 816, max= 1792, per=4.72%, avg=1163.05, stdev=282.90, samples=20 00:26:46.532 iops : min= 204, max= 448, avg=290.75, stdev=70.73, samples=20 00:26:46.532 lat (msec) : 10=1.09%, 20=2.02%, 50=42.82%, 100=51.06%, 250=3.01% 00:26:46.532 cpu : usr=43.93%, sys=0.54%, ctx=1407, majf=0, minf=1 00:26:46.532 IO depths : 1=0.7%, 2=1.7%, 4=9.1%, 8=75.9%, 16=12.6%, 32=0.0%, >=64=0.0% 00:26:46.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.532 complete : 0=0.0%, 4=89.7%, 8=5.5%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.532 issued rwts: total=2924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.532 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:46.532 filename1: (groupid=0, jobs=1): err= 0: pid=102677: Thu Dec 5 14:32:52 2024 00:26:46.532 read: IOPS=261, 
BW=1046KiB/s (1071kB/s)(10.2MiB/10022msec) 00:26:46.532 slat (usec): min=4, max=7027, avg=14.81, stdev=143.06 00:26:46.532 clat (msec): min=19, max=130, avg=61.07, stdev=19.67 00:26:46.532 lat (msec): min=19, max=130, avg=61.09, stdev=19.67 00:26:46.532 clat percentiles (msec): 00:26:46.532 | 1.00th=[ 24], 5.00th=[ 32], 10.00th=[ 39], 20.00th=[ 44], 00:26:46.532 | 30.00th=[ 50], 40.00th=[ 56], 50.00th=[ 61], 60.00th=[ 65], 00:26:46.532 | 70.00th=[ 69], 80.00th=[ 78], 90.00th=[ 90], 95.00th=[ 100], 00:26:46.532 | 99.00th=[ 106], 99.50th=[ 112], 99.90th=[ 131], 99.95th=[ 131], 00:26:46.532 | 99.99th=[ 131] 00:26:46.532 bw ( KiB/s): min= 768, max= 1384, per=4.23%, avg=1042.00, stdev=154.47, samples=20 00:26:46.532 iops : min= 192, max= 346, avg=260.45, stdev=38.63, samples=20 00:26:46.532 lat (msec) : 20=0.11%, 50=31.13%, 100=65.36%, 250=3.40% 00:26:46.532 cpu : usr=42.57%, sys=0.58%, ctx=1356, majf=0, minf=9 00:26:46.532 IO depths : 1=2.0%, 2=4.3%, 4=11.9%, 8=70.3%, 16=11.4%, 32=0.0%, >=64=0.0% 00:26:46.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.532 complete : 0=0.0%, 4=90.8%, 8=4.5%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.532 issued rwts: total=2621,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.532 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:46.532 filename1: (groupid=0, jobs=1): err= 0: pid=102678: Thu Dec 5 14:32:52 2024 00:26:46.532 read: IOPS=294, BW=1178KiB/s (1207kB/s)(11.5MiB/10035msec) 00:26:46.532 slat (usec): min=3, max=8027, avg=22.36, stdev=249.10 00:26:46.532 clat (msec): min=15, max=120, avg=54.10, stdev=17.85 00:26:46.532 lat (msec): min=15, max=120, avg=54.12, stdev=17.85 00:26:46.532 clat percentiles (msec): 00:26:46.532 | 1.00th=[ 20], 5.00th=[ 30], 10.00th=[ 34], 20.00th=[ 40], 00:26:46.532 | 30.00th=[ 43], 40.00th=[ 47], 50.00th=[ 52], 60.00th=[ 57], 00:26:46.532 | 70.00th=[ 64], 80.00th=[ 69], 90.00th=[ 78], 95.00th=[ 88], 00:26:46.532 | 99.00th=[ 100], 99.50th=[ 111], 99.90th=[ 121], 99.95th=[ 121], 00:26:46.532 | 99.99th=[ 121] 00:26:46.532 bw ( KiB/s): min= 768, max= 1904, per=4.78%, avg=1178.40, stdev=240.21, samples=20 00:26:46.532 iops : min= 192, max= 476, avg=294.60, stdev=60.05, samples=20 00:26:46.532 lat (msec) : 20=1.08%, 50=45.09%, 100=52.94%, 250=0.88% 00:26:46.532 cpu : usr=44.85%, sys=0.66%, ctx=1223, majf=0, minf=9 00:26:46.532 IO depths : 1=0.5%, 2=1.3%, 4=7.4%, 8=77.6%, 16=13.1%, 32=0.0%, >=64=0.0% 00:26:46.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.532 complete : 0=0.0%, 4=89.5%, 8=6.0%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.532 issued rwts: total=2956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.532 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:46.532 filename2: (groupid=0, jobs=1): err= 0: pid=102679: Thu Dec 5 14:32:52 2024 00:26:46.532 read: IOPS=269, BW=1077KiB/s (1103kB/s)(10.6MiB/10037msec) 00:26:46.532 slat (usec): min=3, max=4335, avg=13.63, stdev=92.04 00:26:46.532 clat (msec): min=13, max=161, avg=59.29, stdev=23.22 00:26:46.532 lat (msec): min=13, max=161, avg=59.31, stdev=23.22 00:26:46.532 clat percentiles (msec): 00:26:46.532 | 1.00th=[ 16], 5.00th=[ 27], 10.00th=[ 35], 20.00th=[ 40], 00:26:46.532 | 30.00th=[ 46], 40.00th=[ 51], 50.00th=[ 56], 60.00th=[ 61], 00:26:46.532 | 70.00th=[ 68], 80.00th=[ 78], 90.00th=[ 93], 95.00th=[ 104], 00:26:46.532 | 99.00th=[ 131], 99.50th=[ 138], 99.90th=[ 163], 99.95th=[ 163], 00:26:46.532 | 99.99th=[ 163] 00:26:46.532 bw ( KiB/s): min= 560, max= 1760, per=4.35%, 
avg=1074.80, stdev=268.34, samples=20 00:26:46.532 iops : min= 140, max= 440, avg=268.70, stdev=67.08, samples=20 00:26:46.532 lat (msec) : 20=1.26%, 50=37.22%, 100=55.23%, 250=6.29% 00:26:46.532 cpu : usr=41.38%, sys=0.56%, ctx=1480, majf=0, minf=9 00:26:46.532 IO depths : 1=1.1%, 2=2.4%, 4=8.9%, 8=75.0%, 16=12.7%, 32=0.0%, >=64=0.0% 00:26:46.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.532 complete : 0=0.0%, 4=89.8%, 8=5.8%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.532 issued rwts: total=2703,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.532 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:46.532 filename2: (groupid=0, jobs=1): err= 0: pid=102680: Thu Dec 5 14:32:52 2024 00:26:46.532 read: IOPS=249, BW=999KiB/s (1023kB/s)(9.79MiB/10032msec) 00:26:46.532 slat (nsec): min=3319, max=50881, avg=11902.18, stdev=7223.49 00:26:46.532 clat (msec): min=20, max=146, avg=63.93, stdev=22.89 00:26:46.532 lat (msec): min=20, max=146, avg=63.94, stdev=22.89 00:26:46.532 clat percentiles (msec): 00:26:46.532 | 1.00th=[ 23], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 47], 00:26:46.532 | 30.00th=[ 50], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 68], 00:26:46.532 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 96], 95.00th=[ 107], 00:26:46.532 | 99.00th=[ 132], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 146], 00:26:46.532 | 99.99th=[ 146] 00:26:46.532 bw ( KiB/s): min= 640, max= 1456, per=4.03%, avg=995.00, stdev=204.85, samples=20 00:26:46.532 iops : min= 160, max= 364, avg=248.75, stdev=51.21, samples=20 00:26:46.532 lat (msec) : 50=30.46%, 100=61.92%, 250=7.62% 00:26:46.532 cpu : usr=32.72%, sys=0.42%, ctx=869, majf=0, minf=9 00:26:46.532 IO depths : 1=1.0%, 2=2.2%, 4=8.7%, 8=75.2%, 16=13.1%, 32=0.0%, >=64=0.0% 00:26:46.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.532 complete : 0=0.0%, 4=89.8%, 8=6.1%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.532 issued rwts: total=2505,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.532 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:46.532 filename2: (groupid=0, jobs=1): err= 0: pid=102681: Thu Dec 5 14:32:52 2024 00:26:46.532 read: IOPS=263, BW=1054KiB/s (1079kB/s)(10.3MiB/10026msec) 00:26:46.532 slat (usec): min=4, max=6988, avg=16.72, stdev=156.82 00:26:46.532 clat (msec): min=20, max=126, avg=60.62, stdev=21.27 00:26:46.532 lat (msec): min=20, max=126, avg=60.64, stdev=21.27 00:26:46.532 clat percentiles (msec): 00:26:46.532 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 41], 00:26:46.532 | 30.00th=[ 47], 40.00th=[ 52], 50.00th=[ 58], 60.00th=[ 62], 00:26:46.532 | 70.00th=[ 70], 80.00th=[ 81], 90.00th=[ 91], 95.00th=[ 100], 00:26:46.532 | 99.00th=[ 115], 99.50th=[ 125], 99.90th=[ 128], 99.95th=[ 128], 00:26:46.532 | 99.99th=[ 128] 00:26:46.532 bw ( KiB/s): min= 640, max= 1456, per=4.27%, avg=1052.00, stdev=208.92, samples=20 00:26:46.532 iops : min= 160, max= 364, avg=262.95, stdev=52.26, samples=20 00:26:46.532 lat (msec) : 50=38.13%, 100=57.21%, 250=4.66% 00:26:46.532 cpu : usr=39.92%, sys=0.55%, ctx=1261, majf=0, minf=9 00:26:46.532 IO depths : 1=0.3%, 2=0.7%, 4=5.5%, 8=79.3%, 16=14.2%, 32=0.0%, >=64=0.0% 00:26:46.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.532 complete : 0=0.0%, 4=89.0%, 8=7.3%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.532 issued rwts: total=2641,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.532 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:46.532 filename2: 
(groupid=0, jobs=1): err= 0: pid=102682: Thu Dec 5 14:32:52 2024 00:26:46.532 read: IOPS=228, BW=914KiB/s (936kB/s)(9164KiB/10030msec) 00:26:46.532 slat (usec): min=3, max=4032, avg=17.33, stdev=131.77 00:26:46.532 clat (msec): min=16, max=175, avg=69.90, stdev=20.06 00:26:46.532 lat (msec): min=16, max=175, avg=69.92, stdev=20.07 00:26:46.532 clat percentiles (msec): 00:26:46.532 | 1.00th=[ 26], 5.00th=[ 35], 10.00th=[ 47], 20.00th=[ 57], 00:26:46.532 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 69], 60.00th=[ 73], 00:26:46.532 | 70.00th=[ 80], 80.00th=[ 86], 90.00th=[ 95], 95.00th=[ 101], 00:26:46.532 | 99.00th=[ 126], 99.50th=[ 128], 99.90th=[ 176], 99.95th=[ 176], 00:26:46.532 | 99.99th=[ 176] 00:26:46.532 bw ( KiB/s): min= 640, max= 1408, per=3.69%, avg=909.70, stdev=159.29, samples=20 00:26:46.532 iops : min= 160, max= 352, avg=227.40, stdev=39.81, samples=20 00:26:46.532 lat (msec) : 20=0.44%, 50=13.57%, 100=80.93%, 250=5.06% 00:26:46.532 cpu : usr=38.83%, sys=0.53%, ctx=1151, majf=0, minf=9 00:26:46.532 IO depths : 1=3.1%, 2=6.8%, 4=17.9%, 8=62.6%, 16=9.7%, 32=0.0%, >=64=0.0% 00:26:46.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.532 complete : 0=0.0%, 4=92.0%, 8=2.6%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.532 issued rwts: total=2291,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.532 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:46.532 filename2: (groupid=0, jobs=1): err= 0: pid=102683: Thu Dec 5 14:32:52 2024 00:26:46.532 read: IOPS=302, BW=1211KiB/s (1240kB/s)(11.9MiB/10037msec) 00:26:46.532 slat (usec): min=4, max=4025, avg=15.95, stdev=130.90 00:26:46.532 clat (msec): min=6, max=127, avg=52.70, stdev=20.01 00:26:46.532 lat (msec): min=6, max=127, avg=52.71, stdev=20.01 00:26:46.532 clat percentiles (msec): 00:26:46.532 | 1.00th=[ 10], 5.00th=[ 20], 10.00th=[ 28], 20.00th=[ 39], 00:26:46.532 | 30.00th=[ 42], 40.00th=[ 47], 50.00th=[ 53], 60.00th=[ 57], 00:26:46.532 | 70.00th=[ 62], 80.00th=[ 69], 90.00th=[ 80], 95.00th=[ 88], 00:26:46.532 | 99.00th=[ 107], 99.50th=[ 112], 99.90th=[ 128], 99.95th=[ 128], 00:26:46.532 | 99.99th=[ 128] 00:26:46.532 bw ( KiB/s): min= 856, max= 2240, per=4.90%, avg=1208.35, stdev=308.63, samples=20 00:26:46.532 iops : min= 214, max= 560, avg=302.05, stdev=77.12, samples=20 00:26:46.532 lat (msec) : 10=1.09%, 20=4.44%, 50=40.82%, 100=52.24%, 250=1.42% 00:26:46.532 cpu : usr=43.71%, sys=0.68%, ctx=1280, majf=0, minf=9 00:26:46.532 IO depths : 1=0.9%, 2=2.1%, 4=9.2%, 8=75.4%, 16=12.3%, 32=0.0%, >=64=0.0% 00:26:46.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.532 complete : 0=0.0%, 4=89.8%, 8=5.4%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.532 issued rwts: total=3038,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.532 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:46.532 filename2: (groupid=0, jobs=1): err= 0: pid=102684: Thu Dec 5 14:32:52 2024 00:26:46.532 read: IOPS=255, BW=1022KiB/s (1047kB/s)(10.0MiB/10028msec) 00:26:46.532 slat (usec): min=4, max=8065, avg=23.25, stdev=285.92 00:26:46.532 clat (msec): min=19, max=152, avg=62.41, stdev=21.52 00:26:46.532 lat (msec): min=19, max=152, avg=62.43, stdev=21.52 00:26:46.533 clat percentiles (msec): 00:26:46.533 | 1.00th=[ 22], 5.00th=[ 25], 10.00th=[ 36], 20.00th=[ 46], 00:26:46.533 | 30.00th=[ 49], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 67], 00:26:46.533 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 89], 95.00th=[ 101], 00:26:46.533 | 99.00th=[ 120], 99.50th=[ 144], 99.90th=[ 153], 
99.95th=[ 153], 00:26:46.533 | 99.99th=[ 153] 00:26:46.533 bw ( KiB/s): min= 640, max= 1632, per=4.14%, avg=1022.20, stdev=221.46, samples=20 00:26:46.533 iops : min= 160, max= 408, avg=255.50, stdev=55.38, samples=20 00:26:46.533 lat (msec) : 20=0.39%, 50=31.49%, 100=63.13%, 250=4.99% 00:26:46.533 cpu : usr=32.78%, sys=0.37%, ctx=924, majf=0, minf=9 00:26:46.533 IO depths : 1=0.7%, 2=1.8%, 4=8.8%, 8=75.9%, 16=12.8%, 32=0.0%, >=64=0.0% 00:26:46.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.533 complete : 0=0.0%, 4=89.8%, 8=5.6%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.533 issued rwts: total=2563,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.533 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:46.533 filename2: (groupid=0, jobs=1): err= 0: pid=102685: Thu Dec 5 14:32:52 2024 00:26:46.533 read: IOPS=273, BW=1096KiB/s (1122kB/s)(10.7MiB/10037msec) 00:26:46.533 slat (nsec): min=4299, max=73520, avg=11545.94, stdev=7085.31 00:26:46.533 clat (msec): min=3, max=132, avg=58.26, stdev=20.21 00:26:46.533 lat (msec): min=3, max=132, avg=58.27, stdev=20.21 00:26:46.533 clat percentiles (msec): 00:26:46.533 | 1.00th=[ 9], 5.00th=[ 27], 10.00th=[ 36], 20.00th=[ 41], 00:26:46.533 | 30.00th=[ 47], 40.00th=[ 52], 50.00th=[ 59], 60.00th=[ 61], 00:26:46.533 | 70.00th=[ 66], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 95], 00:26:46.533 | 99.00th=[ 112], 99.50th=[ 126], 99.90th=[ 132], 99.95th=[ 132], 00:26:46.533 | 99.99th=[ 132] 00:26:46.533 bw ( KiB/s): min= 848, max= 1805, per=4.43%, avg=1092.65, stdev=227.55, samples=20 00:26:46.533 iops : min= 212, max= 451, avg=273.15, stdev=56.85, samples=20 00:26:46.533 lat (msec) : 4=0.25%, 10=1.49%, 20=0.25%, 50=36.23%, 100=58.38% 00:26:46.533 lat (msec) : 250=3.38% 00:26:46.533 cpu : usr=35.05%, sys=0.45%, ctx=969, majf=0, minf=9 00:26:46.533 IO depths : 1=0.5%, 2=1.1%, 4=6.9%, 8=78.4%, 16=13.1%, 32=0.0%, >=64=0.0% 00:26:46.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.533 complete : 0=0.0%, 4=89.2%, 8=6.3%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.533 issued rwts: total=2749,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.533 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:46.533 filename2: (groupid=0, jobs=1): err= 0: pid=102686: Thu Dec 5 14:32:52 2024 00:26:46.533 read: IOPS=254, BW=1018KiB/s (1043kB/s)(9.99MiB/10045msec) 00:26:46.533 slat (usec): min=3, max=8019, avg=18.88, stdev=223.62 00:26:46.533 clat (msec): min=18, max=143, avg=62.64, stdev=20.42 00:26:46.533 lat (msec): min=18, max=143, avg=62.66, stdev=20.42 00:26:46.533 clat percentiles (msec): 00:26:46.533 | 1.00th=[ 21], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 46], 00:26:46.533 | 30.00th=[ 51], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 63], 00:26:46.533 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 88], 95.00th=[ 96], 00:26:46.533 | 99.00th=[ 123], 99.50th=[ 124], 99.90th=[ 144], 99.95th=[ 144], 00:26:46.533 | 99.99th=[ 144] 00:26:46.533 bw ( KiB/s): min= 640, max= 1490, per=4.13%, avg=1018.90, stdev=195.23, samples=20 00:26:46.533 iops : min= 160, max= 372, avg=254.60, stdev=48.77, samples=20 00:26:46.533 lat (msec) : 20=0.43%, 50=28.71%, 100=66.56%, 250=4.30% 00:26:46.533 cpu : usr=35.25%, sys=0.57%, ctx=932, majf=0, minf=9 00:26:46.533 IO depths : 1=1.1%, 2=2.9%, 4=11.4%, 8=72.1%, 16=12.5%, 32=0.0%, >=64=0.0% 00:26:46.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.533 complete : 0=0.0%, 4=90.3%, 8=5.0%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.533 
issued rwts: total=2557,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.533 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:46.533 00:26:46.533 Run status group 0 (all jobs): 00:26:46.533 READ: bw=24.1MiB/s (25.3MB/s), 899KiB/s-1211KiB/s (920kB/s-1240kB/s), io=243MiB (255MB), run=10001-10088msec 00:26:46.791 14:32:52 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:46.791 14:32:52 -- target/dif.sh@43 -- # local sub 00:26:46.791 14:32:52 -- target/dif.sh@45 -- # for sub in "$@" 00:26:46.791 14:32:52 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:46.791 14:32:52 -- target/dif.sh@36 -- # local sub_id=0 00:26:46.791 14:32:52 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:46.791 14:32:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.791 14:32:52 -- common/autotest_common.sh@10 -- # set +x 00:26:46.791 14:32:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.791 14:32:52 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:46.791 14:32:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.791 14:32:52 -- common/autotest_common.sh@10 -- # set +x 00:26:46.791 14:32:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.791 14:32:52 -- target/dif.sh@45 -- # for sub in "$@" 00:26:46.791 14:32:52 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:46.791 14:32:52 -- target/dif.sh@36 -- # local sub_id=1 00:26:46.791 14:32:52 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:46.792 14:32:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.792 14:32:52 -- common/autotest_common.sh@10 -- # set +x 00:26:46.792 14:32:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.792 14:32:52 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:46.792 14:32:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.792 14:32:52 -- common/autotest_common.sh@10 -- # set +x 00:26:46.792 14:32:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.792 14:32:52 -- target/dif.sh@45 -- # for sub in "$@" 00:26:46.792 14:32:52 -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:46.792 14:32:52 -- target/dif.sh@36 -- # local sub_id=2 00:26:46.792 14:32:52 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:46.792 14:32:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.792 14:32:52 -- common/autotest_common.sh@10 -- # set +x 00:26:46.792 14:32:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.792 14:32:52 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:46.792 14:32:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.792 14:32:52 -- common/autotest_common.sh@10 -- # set +x 00:26:46.792 14:32:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.792 14:32:52 -- target/dif.sh@115 -- # NULL_DIF=1 00:26:46.792 14:32:52 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:46.792 14:32:52 -- target/dif.sh@115 -- # numjobs=2 00:26:46.792 14:32:52 -- target/dif.sh@115 -- # iodepth=8 00:26:46.792 14:32:52 -- target/dif.sh@115 -- # runtime=5 00:26:46.792 14:32:52 -- target/dif.sh@115 -- # files=1 00:26:46.792 14:32:52 -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:46.792 14:32:52 -- target/dif.sh@28 -- # local sub 00:26:46.792 14:32:52 -- target/dif.sh@30 -- # for sub in "$@" 00:26:46.792 14:32:52 -- target/dif.sh@31 -- # create_subsystem 0 00:26:46.792 14:32:52 -- target/dif.sh@18 -- # local sub_id=0 00:26:46.792 14:32:52 
-- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:46.792 14:32:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.792 14:32:52 -- common/autotest_common.sh@10 -- # set +x 00:26:46.792 bdev_null0 00:26:46.792 14:32:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.792 14:32:52 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:46.792 14:32:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.792 14:32:52 -- common/autotest_common.sh@10 -- # set +x 00:26:46.792 14:32:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.792 14:32:52 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:46.792 14:32:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.792 14:32:52 -- common/autotest_common.sh@10 -- # set +x 00:26:46.792 14:32:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.792 14:32:52 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:46.792 14:32:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.792 14:32:52 -- common/autotest_common.sh@10 -- # set +x 00:26:46.792 [2024-12-05 14:32:52.415954] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:46.792 14:32:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.792 14:32:52 -- target/dif.sh@30 -- # for sub in "$@" 00:26:46.792 14:32:52 -- target/dif.sh@31 -- # create_subsystem 1 00:26:46.792 14:32:52 -- target/dif.sh@18 -- # local sub_id=1 00:26:46.792 14:32:52 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:46.792 14:32:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.792 14:32:52 -- common/autotest_common.sh@10 -- # set +x 00:26:46.792 bdev_null1 00:26:46.792 14:32:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.792 14:32:52 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:46.792 14:32:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.792 14:32:52 -- common/autotest_common.sh@10 -- # set +x 00:26:46.792 14:32:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.792 14:32:52 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:46.792 14:32:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.792 14:32:52 -- common/autotest_common.sh@10 -- # set +x 00:26:47.051 14:32:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.051 14:32:52 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:47.051 14:32:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.051 14:32:52 -- common/autotest_common.sh@10 -- # set +x 00:26:47.051 14:32:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.051 14:32:52 -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:47.051 14:32:52 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:47.051 14:32:52 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:47.051 14:32:52 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:47.051 14:32:52 -- 
target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:47.051 14:32:52 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:47.051 14:32:52 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:47.051 14:32:52 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:47.051 14:32:52 -- nvmf/common.sh@520 -- # config=() 00:26:47.051 14:32:52 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:47.051 14:32:52 -- common/autotest_common.sh@1330 -- # shift 00:26:47.051 14:32:52 -- target/dif.sh@82 -- # gen_fio_conf 00:26:47.051 14:32:52 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:47.051 14:32:52 -- nvmf/common.sh@520 -- # local subsystem config 00:26:47.051 14:32:52 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:47.051 14:32:52 -- target/dif.sh@54 -- # local file 00:26:47.051 14:32:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:47.051 14:32:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:47.051 { 00:26:47.051 "params": { 00:26:47.051 "name": "Nvme$subsystem", 00:26:47.051 "trtype": "$TEST_TRANSPORT", 00:26:47.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:47.051 "adrfam": "ipv4", 00:26:47.051 "trsvcid": "$NVMF_PORT", 00:26:47.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:47.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:47.051 "hdgst": ${hdgst:-false}, 00:26:47.051 "ddgst": ${ddgst:-false} 00:26:47.051 }, 00:26:47.051 "method": "bdev_nvme_attach_controller" 00:26:47.051 } 00:26:47.051 EOF 00:26:47.051 )") 00:26:47.051 14:32:52 -- target/dif.sh@56 -- # cat 00:26:47.051 14:32:52 -- nvmf/common.sh@542 -- # cat 00:26:47.051 14:32:52 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:47.051 14:32:52 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:47.051 14:32:52 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:47.051 14:32:52 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:47.051 14:32:52 -- target/dif.sh@72 -- # (( file <= files )) 00:26:47.051 14:32:52 -- target/dif.sh@73 -- # cat 00:26:47.051 14:32:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:47.051 14:32:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:47.051 { 00:26:47.051 "params": { 00:26:47.051 "name": "Nvme$subsystem", 00:26:47.051 "trtype": "$TEST_TRANSPORT", 00:26:47.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:47.051 "adrfam": "ipv4", 00:26:47.051 "trsvcid": "$NVMF_PORT", 00:26:47.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:47.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:47.051 "hdgst": ${hdgst:-false}, 00:26:47.051 "ddgst": ${ddgst:-false} 00:26:47.051 }, 00:26:47.051 "method": "bdev_nvme_attach_controller" 00:26:47.051 } 00:26:47.051 EOF 00:26:47.051 )") 00:26:47.051 14:32:52 -- nvmf/common.sh@542 -- # cat 00:26:47.051 14:32:52 -- target/dif.sh@72 -- # (( file++ )) 00:26:47.051 14:32:52 -- target/dif.sh@72 -- # (( file <= files )) 00:26:47.051 14:32:52 -- nvmf/common.sh@544 -- # jq . 
00:26:47.051 14:32:52 -- nvmf/common.sh@545 -- # IFS=, 00:26:47.051 14:32:52 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:47.051 "params": { 00:26:47.051 "name": "Nvme0", 00:26:47.051 "trtype": "tcp", 00:26:47.051 "traddr": "10.0.0.2", 00:26:47.051 "adrfam": "ipv4", 00:26:47.051 "trsvcid": "4420", 00:26:47.051 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:47.051 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:47.051 "hdgst": false, 00:26:47.051 "ddgst": false 00:26:47.051 }, 00:26:47.051 "method": "bdev_nvme_attach_controller" 00:26:47.051 },{ 00:26:47.051 "params": { 00:26:47.051 "name": "Nvme1", 00:26:47.051 "trtype": "tcp", 00:26:47.052 "traddr": "10.0.0.2", 00:26:47.052 "adrfam": "ipv4", 00:26:47.052 "trsvcid": "4420", 00:26:47.052 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:47.052 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:47.052 "hdgst": false, 00:26:47.052 "ddgst": false 00:26:47.052 }, 00:26:47.052 "method": "bdev_nvme_attach_controller" 00:26:47.052 }' 00:26:47.052 14:32:52 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:47.052 14:32:52 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:47.052 14:32:52 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:47.052 14:32:52 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:47.052 14:32:52 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:47.052 14:32:52 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:47.052 14:32:52 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:47.052 14:32:52 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:47.052 14:32:52 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:47.052 14:32:52 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:47.052 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:47.052 ... 00:26:47.052 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:47.052 ... 00:26:47.052 fio-3.35 00:26:47.052 Starting 4 threads 00:26:47.619 [2024-12-05 14:32:53.176088] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
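The xtrace above shows how dif.sh drives fio: gen_nvmf_target_json emits one bdev_nvme_attach_controller entry per subsystem, and fio_bdev hands that JSON to the LD_PRELOAD'ed spdk_bdev engine via --spdk_json_conf while the job description arrives on a second file descriptor. A minimal stand-alone sketch of the same flow follows; the addresses and NQNs mirror the printed config, but the on-disk file paths, the top-level "subsystems"/"bdev" wrapper, the Nvme0n1 bdev name and the job options are illustrative assumptions, not values copied from dif.sh.

    cat > /tmp/nvme0.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # The attached namespace is exposed as bdev "Nvme0n1" (assumed naming).
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/nvme0.json \
      --thread --name=job0 --filename=Nvme0n1 --rw=randread --bs=8k \
      --iodepth=8 --time_based --runtime=5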
00:26:47.619 [2024-12-05 14:32:53.176178] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:52.911 00:26:52.911 filename0: (groupid=0, jobs=1): err= 0: pid=102818: Thu Dec 5 14:32:58 2024 00:26:52.911 read: IOPS=2209, BW=17.3MiB/s (18.1MB/s)(86.3MiB/5001msec) 00:26:52.911 slat (nsec): min=3261, max=89327, avg=13645.13, stdev=7031.32 00:26:52.911 clat (usec): min=1347, max=6442, avg=3555.38, stdev=198.99 00:26:52.911 lat (usec): min=1358, max=6448, avg=3569.03, stdev=199.14 00:26:52.911 clat percentiles (usec): 00:26:52.911 | 1.00th=[ 3130], 5.00th=[ 3392], 10.00th=[ 3425], 20.00th=[ 3458], 00:26:52.911 | 30.00th=[ 3490], 40.00th=[ 3523], 50.00th=[ 3523], 60.00th=[ 3556], 00:26:52.911 | 70.00th=[ 3589], 80.00th=[ 3654], 90.00th=[ 3720], 95.00th=[ 3785], 00:26:52.911 | 99.00th=[ 4146], 99.50th=[ 4424], 99.90th=[ 5342], 99.95th=[ 5669], 00:26:52.911 | 99.99th=[ 6390] 00:26:52.911 bw ( KiB/s): min=17424, max=18048, per=25.00%, avg=17682.00, stdev=208.24, samples=9 00:26:52.911 iops : min= 2178, max= 2256, avg=2210.22, stdev=26.07, samples=9 00:26:52.911 lat (msec) : 2=0.18%, 4=98.49%, 10=1.33% 00:26:52.911 cpu : usr=95.38%, sys=3.40%, ctx=110, majf=0, minf=0 00:26:52.911 IO depths : 1=7.1%, 2=25.0%, 4=50.0%, 8=17.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:52.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.911 complete : 0=0.0%, 4=89.4%, 8=10.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.911 issued rwts: total=11048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:52.911 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:52.911 filename0: (groupid=0, jobs=1): err= 0: pid=102819: Thu Dec 5 14:32:58 2024 00:26:52.911 read: IOPS=2210, BW=17.3MiB/s (18.1MB/s)(86.4MiB/5001msec) 00:26:52.911 slat (nsec): min=5791, max=79045, avg=11145.59, stdev=7677.30 00:26:52.911 clat (usec): min=1205, max=6378, avg=3571.20, stdev=268.02 00:26:52.911 lat (usec): min=1211, max=6385, avg=3582.35, stdev=267.46 00:26:52.911 clat percentiles (usec): 00:26:52.911 | 1.00th=[ 2442], 5.00th=[ 3392], 10.00th=[ 3425], 20.00th=[ 3490], 00:26:52.911 | 30.00th=[ 3523], 40.00th=[ 3523], 50.00th=[ 3556], 60.00th=[ 3589], 00:26:52.911 | 70.00th=[ 3621], 80.00th=[ 3654], 90.00th=[ 3720], 95.00th=[ 3818], 00:26:52.911 | 99.00th=[ 4424], 99.50th=[ 5014], 99.90th=[ 5866], 99.95th=[ 5932], 00:26:52.911 | 99.99th=[ 6194] 00:26:52.911 bw ( KiB/s): min=17536, max=17952, per=25.02%, avg=17697.78, stdev=129.85, samples=9 00:26:52.911 iops : min= 2192, max= 2244, avg=2212.22, stdev=16.23, samples=9 00:26:52.911 lat (msec) : 2=0.17%, 4=97.66%, 10=2.17% 00:26:52.911 cpu : usr=95.18%, sys=3.44%, ctx=6, majf=0, minf=0 00:26:52.911 IO depths : 1=5.4%, 2=18.8%, 4=56.2%, 8=19.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:52.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.911 complete : 0=0.0%, 4=89.6%, 8=10.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.911 issued rwts: total=11054,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:52.911 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:52.911 filename1: (groupid=0, jobs=1): err= 0: pid=102820: Thu Dec 5 14:32:58 2024 00:26:52.911 read: IOPS=2214, BW=17.3MiB/s (18.1MB/s)(86.6MiB/5003msec) 00:26:52.911 slat (nsec): min=5761, max=74675, avg=8742.78, stdev=5680.54 00:26:52.911 clat (usec): min=1112, max=6488, avg=3568.05, stdev=262.88 00:26:52.911 lat (usec): min=1119, max=6494, avg=3576.79, stdev=262.48 00:26:52.911 clat percentiles (usec): 00:26:52.911 | 1.00th=[ 2638], 
5.00th=[ 3392], 10.00th=[ 3458], 20.00th=[ 3490], 00:26:52.911 | 30.00th=[ 3523], 40.00th=[ 3523], 50.00th=[ 3556], 60.00th=[ 3589], 00:26:52.911 | 70.00th=[ 3621], 80.00th=[ 3654], 90.00th=[ 3720], 95.00th=[ 3851], 00:26:52.911 | 99.00th=[ 4490], 99.50th=[ 4621], 99.90th=[ 5735], 99.95th=[ 5866], 00:26:52.911 | 99.99th=[ 5997] 00:26:52.911 bw ( KiB/s): min=17536, max=18176, per=25.06%, avg=17722.67, stdev=211.66, samples=9 00:26:52.911 iops : min= 2192, max= 2272, avg=2215.33, stdev=26.46, samples=9 00:26:52.911 lat (msec) : 2=0.36%, 4=96.93%, 10=2.71% 00:26:52.911 cpu : usr=94.50%, sys=4.04%, ctx=14, majf=0, minf=10 00:26:52.911 IO depths : 1=7.7%, 2=23.8%, 4=51.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:52.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.911 complete : 0=0.0%, 4=89.4%, 8=10.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.911 issued rwts: total=11081,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:52.911 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:52.911 filename1: (groupid=0, jobs=1): err= 0: pid=102821: Thu Dec 5 14:32:58 2024 00:26:52.911 read: IOPS=2207, BW=17.2MiB/s (18.1MB/s)(86.3MiB/5002msec) 00:26:52.911 slat (usec): min=3, max=289, avg=15.42, stdev= 8.68 00:26:52.911 clat (usec): min=965, max=5988, avg=3553.86, stdev=236.63 00:26:52.911 lat (usec): min=972, max=6010, avg=3569.28, stdev=236.58 00:26:52.911 clat percentiles (usec): 00:26:52.911 | 1.00th=[ 2966], 5.00th=[ 3359], 10.00th=[ 3425], 20.00th=[ 3458], 00:26:52.911 | 30.00th=[ 3490], 40.00th=[ 3523], 50.00th=[ 3523], 60.00th=[ 3556], 00:26:52.911 | 70.00th=[ 3589], 80.00th=[ 3654], 90.00th=[ 3720], 95.00th=[ 3818], 00:26:52.911 | 99.00th=[ 4228], 99.50th=[ 4948], 99.90th=[ 5669], 99.95th=[ 5669], 00:26:52.911 | 99.99th=[ 5997] 00:26:52.911 bw ( KiB/s): min=17408, max=18032, per=24.98%, avg=17667.56, stdev=219.06, samples=9 00:26:52.911 iops : min= 2176, max= 2254, avg=2208.44, stdev=27.38, samples=9 00:26:52.911 lat (usec) : 1000=0.03% 00:26:52.911 lat (msec) : 2=0.25%, 4=98.14%, 10=1.58% 00:26:52.911 cpu : usr=95.70%, sys=2.80%, ctx=75, majf=0, minf=0 00:26:52.911 IO depths : 1=5.0%, 2=22.4%, 4=52.6%, 8=20.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:52.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.911 complete : 0=0.0%, 4=89.6%, 8=10.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.911 issued rwts: total=11043,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:52.911 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:52.911 00:26:52.911 Run status group 0 (all jobs): 00:26:52.911 READ: bw=69.1MiB/s (72.4MB/s), 17.2MiB/s-17.3MiB/s (18.1MB/s-18.1MB/s), io=346MiB (362MB), run=5001-5003msec 00:26:53.171 14:32:58 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:53.171 14:32:58 -- target/dif.sh@43 -- # local sub 00:26:53.171 14:32:58 -- target/dif.sh@45 -- # for sub in "$@" 00:26:53.171 14:32:58 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:53.171 14:32:58 -- target/dif.sh@36 -- # local sub_id=0 00:26:53.171 14:32:58 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:53.171 14:32:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.171 14:32:58 -- common/autotest_common.sh@10 -- # set +x 00:26:53.171 14:32:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.171 14:32:58 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:53.171 14:32:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.171 14:32:58 -- 
common/autotest_common.sh@10 -- # set +x 00:26:53.171 14:32:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.171 14:32:58 -- target/dif.sh@45 -- # for sub in "$@" 00:26:53.171 14:32:58 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:53.171 14:32:58 -- target/dif.sh@36 -- # local sub_id=1 00:26:53.171 14:32:58 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:53.171 14:32:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.171 14:32:58 -- common/autotest_common.sh@10 -- # set +x 00:26:53.171 14:32:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.171 14:32:58 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:53.171 14:32:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.171 14:32:58 -- common/autotest_common.sh@10 -- # set +x 00:26:53.171 14:32:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.171 00:26:53.171 real 0m23.799s 00:26:53.171 user 2m8.193s 00:26:53.171 sys 0m3.497s 00:26:53.171 14:32:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:53.171 ************************************ 00:26:53.171 14:32:58 -- common/autotest_common.sh@10 -- # set +x 00:26:53.171 END TEST fio_dif_rand_params 00:26:53.171 ************************************ 00:26:53.171 14:32:58 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:53.171 14:32:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:53.171 14:32:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:53.171 14:32:58 -- common/autotest_common.sh@10 -- # set +x 00:26:53.171 ************************************ 00:26:53.171 START TEST fio_dif_digest 00:26:53.171 ************************************ 00:26:53.171 14:32:58 -- common/autotest_common.sh@1114 -- # fio_dif_digest 00:26:53.171 14:32:58 -- target/dif.sh@123 -- # local NULL_DIF 00:26:53.171 14:32:58 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:53.171 14:32:58 -- target/dif.sh@125 -- # local hdgst ddgst 00:26:53.171 14:32:58 -- target/dif.sh@127 -- # NULL_DIF=3 00:26:53.171 14:32:58 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:53.171 14:32:58 -- target/dif.sh@127 -- # numjobs=3 00:26:53.171 14:32:58 -- target/dif.sh@127 -- # iodepth=3 00:26:53.171 14:32:58 -- target/dif.sh@127 -- # runtime=10 00:26:53.171 14:32:58 -- target/dif.sh@128 -- # hdgst=true 00:26:53.171 14:32:58 -- target/dif.sh@128 -- # ddgst=true 00:26:53.171 14:32:58 -- target/dif.sh@130 -- # create_subsystems 0 00:26:53.171 14:32:58 -- target/dif.sh@28 -- # local sub 00:26:53.171 14:32:58 -- target/dif.sh@30 -- # for sub in "$@" 00:26:53.171 14:32:58 -- target/dif.sh@31 -- # create_subsystem 0 00:26:53.171 14:32:58 -- target/dif.sh@18 -- # local sub_id=0 00:26:53.171 14:32:58 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:53.171 14:32:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.171 14:32:58 -- common/autotest_common.sh@10 -- # set +x 00:26:53.171 bdev_null0 00:26:53.171 14:32:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.171 14:32:58 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:53.171 14:32:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.171 14:32:58 -- common/autotest_common.sh@10 -- # set +x 00:26:53.171 14:32:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.171 14:32:58 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:53.171 14:32:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.171 14:32:58 -- common/autotest_common.sh@10 -- # set +x 00:26:53.171 14:32:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.171 14:32:58 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:53.171 14:32:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.171 14:32:58 -- common/autotest_common.sh@10 -- # set +x 00:26:53.171 [2024-12-05 14:32:58.687240] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:53.171 14:32:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.171 14:32:58 -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:53.171 14:32:58 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:53.171 14:32:58 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:53.171 14:32:58 -- nvmf/common.sh@520 -- # config=() 00:26:53.171 14:32:58 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:53.171 14:32:58 -- nvmf/common.sh@520 -- # local subsystem config 00:26:53.171 14:32:58 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:53.171 14:32:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:53.171 14:32:58 -- target/dif.sh@82 -- # gen_fio_conf 00:26:53.171 14:32:58 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:53.171 14:32:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:53.171 { 00:26:53.171 "params": { 00:26:53.171 "name": "Nvme$subsystem", 00:26:53.171 "trtype": "$TEST_TRANSPORT", 00:26:53.171 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:53.171 "adrfam": "ipv4", 00:26:53.171 "trsvcid": "$NVMF_PORT", 00:26:53.171 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:53.171 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:53.171 "hdgst": ${hdgst:-false}, 00:26:53.171 "ddgst": ${ddgst:-false} 00:26:53.171 }, 00:26:53.171 "method": "bdev_nvme_attach_controller" 00:26:53.171 } 00:26:53.171 EOF 00:26:53.171 )") 00:26:53.171 14:32:58 -- target/dif.sh@54 -- # local file 00:26:53.171 14:32:58 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:53.171 14:32:58 -- target/dif.sh@56 -- # cat 00:26:53.171 14:32:58 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:53.171 14:32:58 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:53.171 14:32:58 -- common/autotest_common.sh@1330 -- # shift 00:26:53.171 14:32:58 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:53.171 14:32:58 -- nvmf/common.sh@542 -- # cat 00:26:53.171 14:32:58 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:53.171 14:32:58 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:53.171 14:32:58 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:53.171 14:32:58 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:53.171 14:32:58 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:53.171 14:32:58 -- nvmf/common.sh@544 -- # jq . 
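For reference, the target-side objects created by the rpc_cmd calls above can be reproduced against a running nvmf_tgt with SPDK's scripts/rpc.py. The sketch below mirrors the xtrace (null bdev with 16-byte metadata and DIF type 3, one subsystem, one namespace, one TCP listener); the rpc.py path and the transport-creation flags are assumptions — dif.sh sets up its transport outside this excerpt, and the flags shown here are copied from the nvmf_create_transport call visible later in this log.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420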
00:26:53.171 14:32:58 -- target/dif.sh@72 -- # (( file <= files )) 00:26:53.171 14:32:58 -- nvmf/common.sh@545 -- # IFS=, 00:26:53.171 14:32:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:53.171 "params": { 00:26:53.171 "name": "Nvme0", 00:26:53.171 "trtype": "tcp", 00:26:53.171 "traddr": "10.0.0.2", 00:26:53.171 "adrfam": "ipv4", 00:26:53.171 "trsvcid": "4420", 00:26:53.171 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:53.171 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:53.171 "hdgst": true, 00:26:53.171 "ddgst": true 00:26:53.171 }, 00:26:53.171 "method": "bdev_nvme_attach_controller" 00:26:53.171 }' 00:26:53.171 14:32:58 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:53.171 14:32:58 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:53.172 14:32:58 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:53.172 14:32:58 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:53.172 14:32:58 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:53.172 14:32:58 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:53.172 14:32:58 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:53.172 14:32:58 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:53.172 14:32:58 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:53.172 14:32:58 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:53.430 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:53.430 ... 00:26:53.430 fio-3.35 00:26:53.430 Starting 3 threads 00:26:53.689 [2024-12-05 14:32:59.298999] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
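On the initiator side the only functional change from the earlier runs is that the attach parameters now carry "hdgst": true and "ddgst": true, so every NVMe/TCP PDU is protected by header and data digests; the job shape itself is three 128 KiB random-read threads at queue depth 3 for 10 seconds. Expressed as a plain fio job file it would look roughly like the sketch below — the section name, bdev filename and config path are assumed, not taken from dif.sh.

    cat > /tmp/digest.fio <<'EOF'
    [global]
    ioengine=spdk_bdev
    # JSON config whose attach entry sets "hdgst": true, "ddgst": true
    spdk_json_conf=/tmp/nvme0.json
    thread=1
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    time_based=1
    runtime=10
    [filename0]
    filename=Nvme0n1
    EOF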
00:26:53.689 [2024-12-05 14:32:59.299096] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:27:05.892 00:27:05.892 filename0: (groupid=0, jobs=1): err= 0: pid=102927: Thu Dec 5 14:33:09 2024 00:27:05.892 read: IOPS=241, BW=30.2MiB/s (31.7MB/s)(302MiB/10004msec) 00:27:05.892 slat (nsec): min=6221, max=50708, avg=13583.67, stdev=5243.45 00:27:05.892 clat (usec): min=7966, max=54962, avg=12405.11, stdev=8572.94 00:27:05.892 lat (usec): min=7976, max=54972, avg=12418.70, stdev=8573.01 00:27:05.892 clat percentiles (usec): 00:27:05.892 | 1.00th=[ 8717], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9896], 00:27:05.892 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10421], 60.00th=[10683], 00:27:05.892 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11731], 95.00th=[16712], 00:27:05.892 | 99.00th=[51643], 99.50th=[52167], 99.90th=[53740], 99.95th=[54789], 00:27:05.892 | 99.99th=[54789] 00:27:05.892 bw ( KiB/s): min=24576, max=36096, per=33.93%, avg=30935.58, stdev=3334.84, samples=19 00:27:05.892 iops : min= 192, max= 282, avg=241.68, stdev=26.05, samples=19 00:27:05.892 lat (msec) : 10=26.99%, 20=68.42%, 50=0.46%, 100=4.14% 00:27:05.892 cpu : usr=93.85%, sys=4.52%, ctx=89, majf=0, minf=9 00:27:05.892 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:05.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.892 issued rwts: total=2416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:05.892 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:05.892 filename0: (groupid=0, jobs=1): err= 0: pid=102928: Thu Dec 5 14:33:09 2024 00:27:05.892 read: IOPS=253, BW=31.7MiB/s (33.2MB/s)(317MiB/10003msec) 00:27:05.892 slat (nsec): min=6400, max=78155, avg=17401.96, stdev=7031.71 00:27:05.892 clat (usec): min=5943, max=19092, avg=11810.29, stdev=2402.39 00:27:05.892 lat (usec): min=5963, max=19110, avg=11827.69, stdev=2402.71 00:27:05.892 clat percentiles (usec): 00:27:05.892 | 1.00th=[ 6783], 5.00th=[ 7373], 10.00th=[ 7701], 20.00th=[ 8848], 00:27:05.892 | 30.00th=[11469], 40.00th=[12125], 50.00th=[12518], 60.00th=[12780], 00:27:05.892 | 70.00th=[13173], 80.00th=[13566], 90.00th=[14222], 95.00th=[14746], 00:27:05.892 | 99.00th=[16909], 99.50th=[17433], 99.90th=[18482], 99.95th=[19006], 00:27:05.892 | 99.99th=[19006] 00:27:05.892 bw ( KiB/s): min=28160, max=36352, per=35.54%, avg=32404.21, stdev=2406.51, samples=19 00:27:05.892 iops : min= 220, max= 284, avg=253.16, stdev=18.80, samples=19 00:27:05.892 lat (msec) : 10=22.32%, 20=77.68% 00:27:05.892 cpu : usr=94.71%, sys=3.84%, ctx=41, majf=0, minf=9 00:27:05.892 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:05.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.892 issued rwts: total=2536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:05.892 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:05.892 filename0: (groupid=0, jobs=1): err= 0: pid=102929: Thu Dec 5 14:33:09 2024 00:27:05.892 read: IOPS=219, BW=27.4MiB/s (28.7MB/s)(275MiB/10044msec) 00:27:05.892 slat (nsec): min=6251, max=56237, avg=14835.93, stdev=5554.87 00:27:05.892 clat (usec): min=7693, max=51290, avg=13639.11, stdev=2901.00 00:27:05.892 lat (usec): min=7703, max=51302, avg=13653.95, stdev=2900.64 00:27:05.892 clat percentiles (usec): 00:27:05.892 | 
1.00th=[ 8225], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[10552], 00:27:05.892 | 30.00th=[13698], 40.00th=[14091], 50.00th=[14353], 60.00th=[14615], 00:27:05.892 | 70.00th=[14877], 80.00th=[15270], 90.00th=[15795], 95.00th=[16319], 00:27:05.892 | 99.00th=[22152], 99.50th=[23725], 99.90th=[25297], 99.95th=[44827], 00:27:05.892 | 99.99th=[51119] 00:27:05.892 bw ( KiB/s): min=23808, max=32000, per=30.90%, avg=28175.80, stdev=2267.22, samples=20 00:27:05.892 iops : min= 186, max= 250, avg=220.10, stdev=17.69, samples=20 00:27:05.892 lat (msec) : 10=18.79%, 20=79.44%, 50=1.72%, 100=0.05% 00:27:05.892 cpu : usr=94.19%, sys=4.38%, ctx=31, majf=0, minf=9 00:27:05.892 IO depths : 1=2.7%, 2=97.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:05.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.892 issued rwts: total=2203,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:05.892 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:05.892 00:27:05.892 Run status group 0 (all jobs): 00:27:05.892 READ: bw=89.0MiB/s (93.4MB/s), 27.4MiB/s-31.7MiB/s (28.7MB/s-33.2MB/s), io=894MiB (938MB), run=10003-10044msec 00:27:05.892 14:33:09 -- target/dif.sh@132 -- # destroy_subsystems 0 00:27:05.892 14:33:09 -- target/dif.sh@43 -- # local sub 00:27:05.892 14:33:09 -- target/dif.sh@45 -- # for sub in "$@" 00:27:05.892 14:33:09 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:05.892 14:33:09 -- target/dif.sh@36 -- # local sub_id=0 00:27:05.892 14:33:09 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:05.892 14:33:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.892 14:33:09 -- common/autotest_common.sh@10 -- # set +x 00:27:05.892 14:33:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.892 14:33:09 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:05.892 14:33:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.892 14:33:09 -- common/autotest_common.sh@10 -- # set +x 00:27:05.892 14:33:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.892 00:27:05.892 real 0m11.110s 00:27:05.892 user 0m29.044s 00:27:05.892 sys 0m1.580s 00:27:05.892 14:33:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:05.892 14:33:09 -- common/autotest_common.sh@10 -- # set +x 00:27:05.892 ************************************ 00:27:05.892 END TEST fio_dif_digest 00:27:05.892 ************************************ 00:27:05.892 14:33:09 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:27:05.892 14:33:09 -- target/dif.sh@147 -- # nvmftestfini 00:27:05.892 14:33:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:05.892 14:33:09 -- nvmf/common.sh@116 -- # sync 00:27:05.893 14:33:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:05.893 14:33:09 -- nvmf/common.sh@119 -- # set +e 00:27:05.893 14:33:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:05.893 14:33:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:05.893 rmmod nvme_tcp 00:27:05.893 rmmod nvme_fabrics 00:27:05.893 rmmod nvme_keyring 00:27:05.893 14:33:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:05.893 14:33:09 -- nvmf/common.sh@123 -- # set -e 00:27:05.893 14:33:09 -- nvmf/common.sh@124 -- # return 0 00:27:05.893 14:33:09 -- nvmf/common.sh@477 -- # '[' -n 102155 ']' 00:27:05.893 14:33:09 -- nvmf/common.sh@478 -- # killprocess 102155 00:27:05.893 14:33:09 -- common/autotest_common.sh@936 -- # '[' -z 102155 ']' 
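Teardown runs in the reverse order of setup: the subsystem is deleted before its backing null bdev, the nvme kernel modules loaded for the test are unloaded, and finally the nvmf_tgt process is killed and reaped. Condensed into a sketch (the rpc.py path and the pid variable are assumptions for illustration; the RPC names mirror the xtrace above):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    $RPC bdev_null_delete bdev_null0
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"   # 102155 in this run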
00:27:05.893 14:33:09 -- common/autotest_common.sh@940 -- # kill -0 102155 00:27:05.893 14:33:09 -- common/autotest_common.sh@941 -- # uname 00:27:05.893 14:33:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:05.893 14:33:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 102155 00:27:05.893 14:33:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:05.893 killing process with pid 102155 00:27:05.893 14:33:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:05.893 14:33:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 102155' 00:27:05.893 14:33:09 -- common/autotest_common.sh@955 -- # kill 102155 00:27:05.893 14:33:09 -- common/autotest_common.sh@960 -- # wait 102155 00:27:05.893 14:33:10 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:27:05.893 14:33:10 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:05.893 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:05.893 Waiting for block devices as requested 00:27:05.893 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:27:05.893 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:27:05.893 14:33:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:05.893 14:33:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:05.893 14:33:10 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:05.893 14:33:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:05.893 14:33:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.893 14:33:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:05.893 14:33:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:05.893 14:33:10 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:27:05.893 00:27:05.893 real 1m0.360s 00:27:05.893 user 3m53.949s 00:27:05.893 sys 0m12.945s 00:27:05.893 14:33:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:05.893 14:33:10 -- common/autotest_common.sh@10 -- # set +x 00:27:05.893 ************************************ 00:27:05.893 END TEST nvmf_dif 00:27:05.893 ************************************ 00:27:05.893 14:33:10 -- spdk/autotest.sh@288 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:05.893 14:33:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:05.893 14:33:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:05.893 14:33:10 -- common/autotest_common.sh@10 -- # set +x 00:27:05.893 ************************************ 00:27:05.893 START TEST nvmf_abort_qd_sizes 00:27:05.893 ************************************ 00:27:05.893 14:33:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:05.893 * Looking for test storage... 
00:27:05.893 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:05.893 14:33:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:05.893 14:33:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:05.893 14:33:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:05.893 14:33:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:05.893 14:33:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:05.893 14:33:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:05.893 14:33:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:05.893 14:33:11 -- scripts/common.sh@335 -- # IFS=.-: 00:27:05.893 14:33:11 -- scripts/common.sh@335 -- # read -ra ver1 00:27:05.893 14:33:11 -- scripts/common.sh@336 -- # IFS=.-: 00:27:05.893 14:33:11 -- scripts/common.sh@336 -- # read -ra ver2 00:27:05.893 14:33:11 -- scripts/common.sh@337 -- # local 'op=<' 00:27:05.893 14:33:11 -- scripts/common.sh@339 -- # ver1_l=2 00:27:05.893 14:33:11 -- scripts/common.sh@340 -- # ver2_l=1 00:27:05.893 14:33:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:05.893 14:33:11 -- scripts/common.sh@343 -- # case "$op" in 00:27:05.893 14:33:11 -- scripts/common.sh@344 -- # : 1 00:27:05.893 14:33:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:05.893 14:33:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:05.893 14:33:11 -- scripts/common.sh@364 -- # decimal 1 00:27:05.893 14:33:11 -- scripts/common.sh@352 -- # local d=1 00:27:05.893 14:33:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:05.893 14:33:11 -- scripts/common.sh@354 -- # echo 1 00:27:05.893 14:33:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:05.893 14:33:11 -- scripts/common.sh@365 -- # decimal 2 00:27:05.893 14:33:11 -- scripts/common.sh@352 -- # local d=2 00:27:05.893 14:33:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:05.893 14:33:11 -- scripts/common.sh@354 -- # echo 2 00:27:05.893 14:33:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:05.893 14:33:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:05.893 14:33:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:05.893 14:33:11 -- scripts/common.sh@367 -- # return 0 00:27:05.893 14:33:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:05.893 14:33:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:05.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:05.893 --rc genhtml_branch_coverage=1 00:27:05.893 --rc genhtml_function_coverage=1 00:27:05.893 --rc genhtml_legend=1 00:27:05.893 --rc geninfo_all_blocks=1 00:27:05.893 --rc geninfo_unexecuted_blocks=1 00:27:05.893 00:27:05.893 ' 00:27:05.893 14:33:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:05.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:05.893 --rc genhtml_branch_coverage=1 00:27:05.893 --rc genhtml_function_coverage=1 00:27:05.893 --rc genhtml_legend=1 00:27:05.893 --rc geninfo_all_blocks=1 00:27:05.893 --rc geninfo_unexecuted_blocks=1 00:27:05.893 00:27:05.893 ' 00:27:05.893 14:33:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:27:05.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:05.893 --rc genhtml_branch_coverage=1 00:27:05.893 --rc genhtml_function_coverage=1 00:27:05.893 --rc genhtml_legend=1 00:27:05.893 --rc geninfo_all_blocks=1 00:27:05.893 --rc geninfo_unexecuted_blocks=1 00:27:05.893 00:27:05.893 ' 00:27:05.893 
14:33:11 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:05.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:05.893 --rc genhtml_branch_coverage=1 00:27:05.893 --rc genhtml_function_coverage=1 00:27:05.893 --rc genhtml_legend=1 00:27:05.893 --rc geninfo_all_blocks=1 00:27:05.893 --rc geninfo_unexecuted_blocks=1 00:27:05.893 00:27:05.893 ' 00:27:05.893 14:33:11 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:05.893 14:33:11 -- nvmf/common.sh@7 -- # uname -s 00:27:05.893 14:33:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:05.893 14:33:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:05.893 14:33:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:05.893 14:33:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:05.893 14:33:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:05.893 14:33:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:05.893 14:33:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:05.893 14:33:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:05.893 14:33:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:05.893 14:33:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:05.893 14:33:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c 00:27:05.893 14:33:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=14bce80c-f069-437f-874b-17c4f2b14e5c 00:27:05.893 14:33:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:05.893 14:33:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:05.893 14:33:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:05.893 14:33:11 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:05.893 14:33:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:05.893 14:33:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:05.893 14:33:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:05.893 14:33:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.893 14:33:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.893 14:33:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.893 14:33:11 -- paths/export.sh@5 -- # export PATH 00:27:05.893 14:33:11 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.893 14:33:11 -- nvmf/common.sh@46 -- # : 0 00:27:05.893 14:33:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:05.893 14:33:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:05.893 14:33:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:05.893 14:33:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:05.893 14:33:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:05.893 14:33:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:05.894 14:33:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:05.894 14:33:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:05.894 14:33:11 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:27:05.894 14:33:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:05.894 14:33:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:05.894 14:33:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:05.894 14:33:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:05.894 14:33:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:05.894 14:33:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.894 14:33:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:05.894 14:33:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:05.894 14:33:11 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:27:05.894 14:33:11 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:27:05.894 14:33:11 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:27:05.894 14:33:11 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:27:05.894 14:33:11 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:27:05.894 14:33:11 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:27:05.894 14:33:11 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:05.894 14:33:11 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:05.894 14:33:11 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:05.894 14:33:11 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:27:05.894 14:33:11 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:05.894 14:33:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:05.894 14:33:11 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:05.894 14:33:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:05.894 14:33:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:05.894 14:33:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:05.894 14:33:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:05.894 14:33:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:05.894 14:33:11 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:27:05.894 14:33:11 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:27:05.894 Cannot find device "nvmf_tgt_br" 00:27:05.894 14:33:11 -- nvmf/common.sh@154 -- # true 00:27:05.894 14:33:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:27:05.894 Cannot find device "nvmf_tgt_br2" 00:27:05.894 14:33:11 -- nvmf/common.sh@155 -- # true 
00:27:05.894 14:33:11 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:27:05.894 14:33:11 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:27:05.894 Cannot find device "nvmf_tgt_br" 00:27:05.894 14:33:11 -- nvmf/common.sh@157 -- # true 00:27:05.894 14:33:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:27:05.894 Cannot find device "nvmf_tgt_br2" 00:27:05.894 14:33:11 -- nvmf/common.sh@158 -- # true 00:27:05.894 14:33:11 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:27:05.894 14:33:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:27:05.894 14:33:11 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:05.894 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:05.894 14:33:11 -- nvmf/common.sh@161 -- # true 00:27:05.894 14:33:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:05.894 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:05.894 14:33:11 -- nvmf/common.sh@162 -- # true 00:27:05.894 14:33:11 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:27:05.894 14:33:11 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:05.894 14:33:11 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:05.894 14:33:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:05.894 14:33:11 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:05.894 14:33:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:05.894 14:33:11 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:05.894 14:33:11 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:05.894 14:33:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:05.894 14:33:11 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:27:05.894 14:33:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:27:05.894 14:33:11 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:27:05.894 14:33:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:27:05.894 14:33:11 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:05.894 14:33:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:05.894 14:33:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:05.894 14:33:11 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:27:05.894 14:33:11 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:27:05.894 14:33:11 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:27:05.894 14:33:11 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:05.894 14:33:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:05.894 14:33:11 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:05.894 14:33:11 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:05.894 14:33:11 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:27:05.894 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:05.894 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:27:05.894 00:27:05.894 --- 10.0.0.2 ping statistics --- 00:27:05.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.894 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:27:05.894 14:33:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:27:05.894 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:05.894 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:27:05.894 00:27:05.894 --- 10.0.0.3 ping statistics --- 00:27:05.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.894 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:27:05.894 14:33:11 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:05.894 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:05.894 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:27:05.894 00:27:05.894 --- 10.0.0.1 ping statistics --- 00:27:05.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.894 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:27:05.894 14:33:11 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:05.894 14:33:11 -- nvmf/common.sh@421 -- # return 0 00:27:05.894 14:33:11 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:27:05.894 14:33:11 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:06.831 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:06.831 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:27:06.831 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:27:06.831 14:33:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:06.831 14:33:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:06.831 14:33:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:06.831 14:33:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:06.831 14:33:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:06.831 14:33:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:06.831 14:33:12 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:27:06.831 14:33:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:06.831 14:33:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:06.831 14:33:12 -- common/autotest_common.sh@10 -- # set +x 00:27:06.831 14:33:12 -- nvmf/common.sh@469 -- # nvmfpid=103534 00:27:06.831 14:33:12 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:27:06.831 14:33:12 -- nvmf/common.sh@470 -- # waitforlisten 103534 00:27:06.831 14:33:12 -- common/autotest_common.sh@829 -- # '[' -z 103534 ']' 00:27:06.831 14:33:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:06.831 14:33:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:06.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:06.831 14:33:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:06.831 14:33:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:06.831 14:33:12 -- common/autotest_common.sh@10 -- # set +x 00:27:06.831 [2024-12-05 14:33:12.455173] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
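The successful pings confirm the topology nvmf_veth_init builds for the TCP tests: the initiator address 10.0.0.1 stays on the host side of a veth pair, 10.0.0.2 and 10.0.0.3 sit on veth peers inside the nvmf_tgt_ns_spdk namespace, and the host-side peers (nvmf_init_br, nvmf_tgt_br) are enslaved to the nvmf_br bridge. Condensed to a single target interface (device names and addresses copied from the commands above; the second target interface and cleanup checks are omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT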
00:27:06.831 [2024-12-05 14:33:12.455280] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:07.090 [2024-12-05 14:33:12.600512] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:07.090 [2024-12-05 14:33:12.691792] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:07.090 [2024-12-05 14:33:12.691998] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:07.090 [2024-12-05 14:33:12.692017] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:07.090 [2024-12-05 14:33:12.692042] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:07.090 [2024-12-05 14:33:12.692251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:07.090 [2024-12-05 14:33:12.692794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:07.090 [2024-12-05 14:33:12.693369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:07.090 [2024-12-05 14:33:12.693452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:08.027 14:33:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:08.027 14:33:13 -- common/autotest_common.sh@862 -- # return 0 00:27:08.027 14:33:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:08.027 14:33:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:08.027 14:33:13 -- common/autotest_common.sh@10 -- # set +x 00:27:08.027 14:33:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:08.027 14:33:13 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:27:08.027 14:33:13 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:27:08.027 14:33:13 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:27:08.027 14:33:13 -- scripts/common.sh@311 -- # local bdf bdfs 00:27:08.027 14:33:13 -- scripts/common.sh@312 -- # local nvmes 00:27:08.027 14:33:13 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:27:08.027 14:33:13 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:27:08.027 14:33:13 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:27:08.027 14:33:13 -- scripts/common.sh@297 -- # local bdf= 00:27:08.027 14:33:13 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:27:08.027 14:33:13 -- scripts/common.sh@232 -- # local class 00:27:08.027 14:33:13 -- scripts/common.sh@233 -- # local subclass 00:27:08.027 14:33:13 -- scripts/common.sh@234 -- # local progif 00:27:08.027 14:33:13 -- scripts/common.sh@235 -- # printf %02x 1 00:27:08.027 14:33:13 -- scripts/common.sh@235 -- # class=01 00:27:08.027 14:33:13 -- scripts/common.sh@236 -- # printf %02x 8 00:27:08.027 14:33:13 -- scripts/common.sh@236 -- # subclass=08 00:27:08.027 14:33:13 -- scripts/common.sh@237 -- # printf %02x 2 00:27:08.027 14:33:13 -- scripts/common.sh@237 -- # progif=02 00:27:08.027 14:33:13 -- scripts/common.sh@239 -- # hash lspci 00:27:08.027 14:33:13 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:27:08.027 14:33:13 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:27:08.027 14:33:13 -- scripts/common.sh@242 -- # grep -i -- -p02 00:27:08.027 14:33:13 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:27:08.027 14:33:13 -- scripts/common.sh@244 -- # tr -d '"' 00:27:08.027 14:33:13 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:27:08.027 14:33:13 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:27:08.027 14:33:13 -- scripts/common.sh@15 -- # local i 00:27:08.027 14:33:13 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:27:08.027 14:33:13 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:27:08.027 14:33:13 -- scripts/common.sh@24 -- # return 0 00:27:08.027 14:33:13 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:27:08.027 14:33:13 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:27:08.027 14:33:13 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:27:08.027 14:33:13 -- scripts/common.sh@15 -- # local i 00:27:08.027 14:33:13 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:27:08.027 14:33:13 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:27:08.027 14:33:13 -- scripts/common.sh@24 -- # return 0 00:27:08.027 14:33:13 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:27:08.028 14:33:13 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:27:08.028 14:33:13 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:27:08.028 14:33:13 -- scripts/common.sh@322 -- # uname -s 00:27:08.028 14:33:13 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:27:08.028 14:33:13 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:27:08.028 14:33:13 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:27:08.028 14:33:13 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:27:08.028 14:33:13 -- scripts/common.sh@322 -- # uname -s 00:27:08.028 14:33:13 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:27:08.028 14:33:13 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:27:08.028 14:33:13 -- scripts/common.sh@327 -- # (( 2 )) 00:27:08.028 14:33:13 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:27:08.028 14:33:13 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:27:08.028 14:33:13 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:27:08.028 14:33:13 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:27:08.028 14:33:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:08.028 14:33:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:08.028 14:33:13 -- common/autotest_common.sh@10 -- # set +x 00:27:08.028 ************************************ 00:27:08.028 START TEST spdk_target_abort 00:27:08.028 ************************************ 00:27:08.028 14:33:13 -- common/autotest_common.sh@1114 -- # spdk_target 00:27:08.028 14:33:13 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:27:08.028 14:33:13 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:27:08.028 14:33:13 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:27:08.028 14:33:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.028 14:33:13 -- common/autotest_common.sh@10 -- # set +x 00:27:08.028 spdk_targetn1 00:27:08.028 14:33:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.028 14:33:13 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:08.028 14:33:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.028 14:33:13 -- common/autotest_common.sh@10 -- # set +x 00:27:08.028 [2024-12-05 
14:33:13.653953] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:08.028 14:33:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.028 14:33:13 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:27:08.028 14:33:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.028 14:33:13 -- common/autotest_common.sh@10 -- # set +x 00:27:08.028 14:33:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.028 14:33:13 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:27:08.028 14:33:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.028 14:33:13 -- common/autotest_common.sh@10 -- # set +x 00:27:08.287 14:33:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.287 14:33:13 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:27:08.287 14:33:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.287 14:33:13 -- common/autotest_common.sh@10 -- # set +x 00:27:08.287 [2024-12-05 14:33:13.682172] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:08.287 14:33:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.287 14:33:13 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:27:08.287 14:33:13 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:08.287 14:33:13 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:08.287 14:33:13 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:27:08.287 14:33:13 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:08.287 14:33:13 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:27:08.287 14:33:13 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:08.287 14:33:13 -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:08.287 14:33:13 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:08.287 14:33:13 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:08.287 14:33:13 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:08.287 14:33:13 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:08.287 14:33:13 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:08.287 14:33:13 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:08.287 14:33:13 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:27:08.287 14:33:13 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:08.287 14:33:13 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:08.287 14:33:13 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:08.287 14:33:13 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:27:08.287 14:33:13 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:08.287 14:33:13 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:27:11.573 Initializing NVMe Controllers 00:27:11.573 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:27:11.573 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:27:11.573 Initialization complete. Launching workers. 00:27:11.573 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 11236, failed: 0 00:27:11.573 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1082, failed to submit 10154 00:27:11.573 success 783, unsuccess 299, failed 0 00:27:11.573 14:33:16 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:11.573 14:33:16 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:27:14.919 Initializing NVMe Controllers 00:27:14.919 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:27:14.919 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:27:14.919 Initialization complete. Launching workers. 00:27:14.919 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 5984, failed: 0 00:27:14.919 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1258, failed to submit 4726 00:27:14.919 success 233, unsuccess 1025, failed 0 00:27:14.919 14:33:20 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:14.919 14:33:20 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:27:18.211 Initializing NVMe Controllers 00:27:18.211 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:27:18.211 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:27:18.211 Initialization complete. Launching workers. 
00:27:18.211 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 30869, failed: 0 00:27:18.211 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2610, failed to submit 28259 00:27:18.211 success 488, unsuccess 2122, failed 0 00:27:18.211 14:33:23 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:27:18.211 14:33:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.211 14:33:23 -- common/autotest_common.sh@10 -- # set +x 00:27:18.211 14:33:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.211 14:33:23 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:27:18.211 14:33:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.211 14:33:23 -- common/autotest_common.sh@10 -- # set +x 00:27:18.471 14:33:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.471 14:33:23 -- target/abort_qd_sizes.sh@62 -- # killprocess 103534 00:27:18.471 14:33:23 -- common/autotest_common.sh@936 -- # '[' -z 103534 ']' 00:27:18.471 14:33:23 -- common/autotest_common.sh@940 -- # kill -0 103534 00:27:18.471 14:33:23 -- common/autotest_common.sh@941 -- # uname 00:27:18.471 14:33:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:18.471 14:33:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 103534 00:27:18.471 14:33:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:18.471 14:33:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:18.471 killing process with pid 103534 00:27:18.471 14:33:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 103534' 00:27:18.471 14:33:23 -- common/autotest_common.sh@955 -- # kill 103534 00:27:18.471 14:33:23 -- common/autotest_common.sh@960 -- # wait 103534 00:27:18.730 00:27:18.730 real 0m10.582s 00:27:18.730 user 0m43.025s 00:27:18.730 sys 0m1.844s 00:27:18.730 14:33:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:18.730 14:33:24 -- common/autotest_common.sh@10 -- # set +x 00:27:18.730 ************************************ 00:27:18.730 END TEST spdk_target_abort 00:27:18.730 ************************************ 00:27:18.730 14:33:24 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:27:18.730 14:33:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:18.730 14:33:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:18.730 14:33:24 -- common/autotest_common.sh@10 -- # set +x 00:27:18.730 ************************************ 00:27:18.730 START TEST kernel_target_abort 00:27:18.730 ************************************ 00:27:18.730 14:33:24 -- common/autotest_common.sh@1114 -- # kernel_target 00:27:18.730 14:33:24 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:27:18.730 14:33:24 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:27:18.730 14:33:24 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:27:18.730 14:33:24 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:27:18.730 14:33:24 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:27:18.730 14:33:24 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:27:18.730 14:33:24 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:18.730 14:33:24 -- nvmf/common.sh@627 -- # local block nvme 00:27:18.730 14:33:24 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:27:18.730 14:33:24 -- nvmf/common.sh@630 -- # modprobe nvmet 00:27:18.730 14:33:24 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:18.730 14:33:24 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:18.989 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:18.989 Waiting for block devices as requested 00:27:18.989 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:27:19.256 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:27:19.257 14:33:24 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:27:19.257 14:33:24 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:19.257 14:33:24 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:27:19.257 14:33:24 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:27:19.257 14:33:24 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:27:19.257 No valid GPT data, bailing 00:27:19.257 14:33:24 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:19.257 14:33:24 -- scripts/common.sh@393 -- # pt= 00:27:19.257 14:33:24 -- scripts/common.sh@394 -- # return 1 00:27:19.257 14:33:24 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:27:19.257 14:33:24 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:27:19.257 14:33:24 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:27:19.257 14:33:24 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:27:19.257 14:33:24 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:27:19.257 14:33:24 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:27:19.257 No valid GPT data, bailing 00:27:19.523 14:33:24 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:27:19.523 14:33:24 -- scripts/common.sh@393 -- # pt= 00:27:19.523 14:33:24 -- scripts/common.sh@394 -- # return 1 00:27:19.523 14:33:24 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:27:19.523 14:33:24 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:27:19.523 14:33:24 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:27:19.524 14:33:24 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:27:19.524 14:33:24 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:27:19.524 14:33:24 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:27:19.524 No valid GPT data, bailing 00:27:19.524 14:33:24 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:27:19.524 14:33:24 -- scripts/common.sh@393 -- # pt= 00:27:19.524 14:33:24 -- scripts/common.sh@394 -- # return 1 00:27:19.524 14:33:24 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:27:19.524 14:33:24 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:27:19.524 14:33:24 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:27:19.524 14:33:24 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:27:19.524 14:33:24 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:27:19.524 14:33:24 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:27:19.524 No valid GPT data, bailing 00:27:19.524 14:33:25 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:27:19.524 14:33:25 -- scripts/common.sh@393 -- # pt= 00:27:19.524 14:33:25 -- scripts/common.sh@394 -- # return 1 00:27:19.524 14:33:25 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:27:19.524 14:33:25 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:27:19.524 14:33:25 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:27:19.524 14:33:25 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:27:19.524 14:33:25 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:19.524 14:33:25 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:27:19.524 14:33:25 -- nvmf/common.sh@654 -- # echo 1 00:27:19.524 14:33:25 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:27:19.524 14:33:25 -- nvmf/common.sh@656 -- # echo 1 00:27:19.524 14:33:25 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:27:19.524 14:33:25 -- nvmf/common.sh@663 -- # echo tcp 00:27:19.525 14:33:25 -- nvmf/common.sh@664 -- # echo 4420 00:27:19.525 14:33:25 -- nvmf/common.sh@665 -- # echo ipv4 00:27:19.525 14:33:25 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:19.525 14:33:25 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:14bce80c-f069-437f-874b-17c4f2b14e5c --hostid=14bce80c-f069-437f-874b-17c4f2b14e5c -a 10.0.0.1 -t tcp -s 4420 00:27:19.525 00:27:19.525 Discovery Log Number of Records 2, Generation counter 2 00:27:19.525 =====Discovery Log Entry 0====== 00:27:19.525 trtype: tcp 00:27:19.525 adrfam: ipv4 00:27:19.525 subtype: current discovery subsystem 00:27:19.525 treq: not specified, sq flow control disable supported 00:27:19.525 portid: 1 00:27:19.525 trsvcid: 4420 00:27:19.525 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:19.525 traddr: 10.0.0.1 00:27:19.525 eflags: none 00:27:19.525 sectype: none 00:27:19.525 =====Discovery Log Entry 1====== 00:27:19.525 trtype: tcp 00:27:19.525 adrfam: ipv4 00:27:19.525 subtype: nvme subsystem 00:27:19.525 treq: not specified, sq flow control disable supported 00:27:19.525 portid: 1 00:27:19.525 trsvcid: 4420 00:27:19.525 subnqn: kernel_target 00:27:19.525 traddr: 10.0.0.1 00:27:19.525 eflags: none 00:27:19.525 sectype: none 00:27:19.525 14:33:25 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:27:19.525 14:33:25 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:19.525 14:33:25 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:19.525 14:33:25 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:27:19.525 14:33:25 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:19.526 14:33:25 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:27:19.526 14:33:25 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:19.526 14:33:25 -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:19.526 14:33:25 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:19.526 14:33:25 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:19.526 14:33:25 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:19.526 14:33:25 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:19.526 14:33:25 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:19.526 14:33:25 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:19.526 14:33:25 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:27:19.526 14:33:25 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:19.526 14:33:25 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
00:27:19.526 14:33:25 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:19.526 14:33:25 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:19.526 14:33:25 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:19.526 14:33:25 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:22.815 Initializing NVMe Controllers 00:27:22.815 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:27:22.815 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:27:22.815 Initialization complete. Launching workers. 00:27:22.815 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 31668, failed: 0 00:27:22.815 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 31668, failed to submit 0 00:27:22.815 success 0, unsuccess 31668, failed 0 00:27:22.815 14:33:28 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:22.815 14:33:28 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:26.098 Initializing NVMe Controllers 00:27:26.098 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:27:26.098 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:27:26.098 Initialization complete. Launching workers. 00:27:26.098 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 66491, failed: 0 00:27:26.098 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 27313, failed to submit 39178 00:27:26.098 success 0, unsuccess 27313, failed 0 00:27:26.098 14:33:31 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:26.098 14:33:31 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:29.381 Initializing NVMe Controllers 00:27:29.381 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:27:29.381 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:27:29.381 Initialization complete. Launching workers. 
00:27:29.381 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 71574, failed: 0 00:27:29.381 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 17854, failed to submit 53720 00:27:29.381 success 0, unsuccess 17854, failed 0 00:27:29.381 14:33:34 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:27:29.381 14:33:34 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:27:29.381 14:33:34 -- nvmf/common.sh@677 -- # echo 0 00:27:29.381 14:33:34 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:27:29.381 14:33:34 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:27:29.381 14:33:34 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:29.381 14:33:34 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:27:29.381 14:33:34 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:27:29.381 14:33:34 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:27:29.381 00:27:29.381 real 0m10.449s 00:27:29.381 user 0m4.979s 00:27:29.381 sys 0m2.764s 00:27:29.381 14:33:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:29.381 14:33:34 -- common/autotest_common.sh@10 -- # set +x 00:27:29.381 ************************************ 00:27:29.381 END TEST kernel_target_abort 00:27:29.381 ************************************ 00:27:29.381 14:33:34 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:27:29.381 14:33:34 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:27:29.381 14:33:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:29.381 14:33:34 -- nvmf/common.sh@116 -- # sync 00:27:29.381 14:33:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:29.381 14:33:34 -- nvmf/common.sh@119 -- # set +e 00:27:29.381 14:33:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:29.381 14:33:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:29.381 rmmod nvme_tcp 00:27:29.381 rmmod nvme_fabrics 00:27:29.381 rmmod nvme_keyring 00:27:29.381 14:33:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:29.381 14:33:34 -- nvmf/common.sh@123 -- # set -e 00:27:29.381 14:33:34 -- nvmf/common.sh@124 -- # return 0 00:27:29.381 14:33:34 -- nvmf/common.sh@477 -- # '[' -n 103534 ']' 00:27:29.381 14:33:34 -- nvmf/common.sh@478 -- # killprocess 103534 00:27:29.381 14:33:34 -- common/autotest_common.sh@936 -- # '[' -z 103534 ']' 00:27:29.381 14:33:34 -- common/autotest_common.sh@940 -- # kill -0 103534 00:27:29.381 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (103534) - No such process 00:27:29.381 Process with pid 103534 is not found 00:27:29.381 14:33:34 -- common/autotest_common.sh@963 -- # echo 'Process with pid 103534 is not found' 00:27:29.381 14:33:34 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:27:29.381 14:33:34 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:29.960 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:29.960 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:27:30.221 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:27:30.221 14:33:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:30.221 14:33:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:30.221 14:33:35 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:30.221 14:33:35 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:27:30.221 14:33:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:30.221 14:33:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:30.221 14:33:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:30.221 14:33:35 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:27:30.221 00:27:30.221 real 0m24.784s 00:27:30.221 user 0m49.574s 00:27:30.221 sys 0m6.016s 00:27:30.221 ************************************ 00:27:30.221 END TEST nvmf_abort_qd_sizes 00:27:30.221 ************************************ 00:27:30.221 14:33:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:30.221 14:33:35 -- common/autotest_common.sh@10 -- # set +x 00:27:30.221 14:33:35 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:27:30.221 14:33:35 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:27:30.221 14:33:35 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:27:30.221 14:33:35 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:27:30.221 14:33:35 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:27:30.221 14:33:35 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:27:30.221 14:33:35 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:27:30.221 14:33:35 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:27:30.221 14:33:35 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:27:30.221 14:33:35 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:27:30.221 14:33:35 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:27:30.221 14:33:35 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:27:30.221 14:33:35 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:27:30.221 14:33:35 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:27:30.221 14:33:35 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:27:30.221 14:33:35 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:27:30.221 14:33:35 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:27:30.221 14:33:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:30.221 14:33:35 -- common/autotest_common.sh@10 -- # set +x 00:27:30.221 14:33:35 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:27:30.221 14:33:35 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:27:30.221 14:33:35 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:27:30.221 14:33:35 -- common/autotest_common.sh@10 -- # set +x 00:27:32.173 INFO: APP EXITING 00:27:32.173 INFO: killing all VMs 00:27:32.173 INFO: killing vhost app 00:27:32.173 INFO: EXIT DONE 00:27:32.740 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:32.999 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:27:32.999 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:27:33.567 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:33.567 Cleaning 00:27:33.567 Removing: /var/run/dpdk/spdk0/config 00:27:33.826 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:33.826 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:33.826 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:33.826 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:33.826 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:33.826 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:33.826 Removing: /var/run/dpdk/spdk1/config 00:27:33.826 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:27:33.826 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:27:33.826 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:27:33.826 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:27:33.826 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:27:33.826 Removing: /var/run/dpdk/spdk1/hugepage_info 00:27:33.826 Removing: /var/run/dpdk/spdk2/config 00:27:33.826 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:27:33.826 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:27:33.826 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:27:33.826 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:27:33.826 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:27:33.826 Removing: /var/run/dpdk/spdk2/hugepage_info 00:27:33.826 Removing: /var/run/dpdk/spdk3/config 00:27:33.826 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:27:33.826 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:27:33.826 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:27:33.826 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:27:33.826 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:27:33.826 Removing: /var/run/dpdk/spdk3/hugepage_info 00:27:33.826 Removing: /var/run/dpdk/spdk4/config 00:27:33.826 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:27:33.826 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:27:33.826 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:27:33.826 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:27:33.826 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:27:33.826 Removing: /var/run/dpdk/spdk4/hugepage_info 00:27:33.826 Removing: /dev/shm/nvmf_trace.0 00:27:33.826 Removing: /dev/shm/spdk_tgt_trace.pid67579 00:27:33.826 Removing: /var/run/dpdk/spdk0 00:27:33.826 Removing: /var/run/dpdk/spdk1 00:27:33.826 Removing: /var/run/dpdk/spdk2 00:27:33.826 Removing: /var/run/dpdk/spdk3 00:27:33.826 Removing: /var/run/dpdk/spdk4 00:27:33.826 Removing: /var/run/dpdk/spdk_pid100502 00:27:33.826 Removing: /var/run/dpdk/spdk_pid100706 00:27:33.826 Removing: /var/run/dpdk/spdk_pid100998 00:27:33.826 Removing: /var/run/dpdk/spdk_pid101309 00:27:33.826 Removing: /var/run/dpdk/spdk_pid101854 00:27:33.826 Removing: /var/run/dpdk/spdk_pid101859 00:27:33.826 Removing: /var/run/dpdk/spdk_pid102235 00:27:33.826 Removing: /var/run/dpdk/spdk_pid102390 00:27:33.826 Removing: /var/run/dpdk/spdk_pid102548 00:27:33.826 Removing: /var/run/dpdk/spdk_pid102648 00:27:33.826 Removing: /var/run/dpdk/spdk_pid102814 00:27:33.826 Removing: /var/run/dpdk/spdk_pid102923 00:27:33.826 Removing: /var/run/dpdk/spdk_pid103603 00:27:33.826 Removing: /var/run/dpdk/spdk_pid103637 00:27:33.826 Removing: /var/run/dpdk/spdk_pid103675 00:27:33.826 Removing: /var/run/dpdk/spdk_pid103924 00:27:33.826 Removing: /var/run/dpdk/spdk_pid103959 00:27:33.826 Removing: /var/run/dpdk/spdk_pid103991 00:27:33.826 Removing: /var/run/dpdk/spdk_pid67422 00:27:33.826 Removing: /var/run/dpdk/spdk_pid67579 00:27:33.826 Removing: /var/run/dpdk/spdk_pid67901 00:27:33.826 Removing: /var/run/dpdk/spdk_pid68170 00:27:33.826 Removing: /var/run/dpdk/spdk_pid68353 00:27:33.826 Removing: /var/run/dpdk/spdk_pid68442 00:27:33.826 Removing: /var/run/dpdk/spdk_pid68535 00:27:33.826 Removing: /var/run/dpdk/spdk_pid68632 00:27:33.827 Removing: /var/run/dpdk/spdk_pid68676 00:27:33.827 Removing: /var/run/dpdk/spdk_pid68706 00:27:33.827 Removing: /var/run/dpdk/spdk_pid68769 00:27:33.827 Removing: /var/run/dpdk/spdk_pid68873 00:27:33.827 Removing: /var/run/dpdk/spdk_pid69505 00:27:33.827 Removing: /var/run/dpdk/spdk_pid69563 00:27:33.827 Removing: /var/run/dpdk/spdk_pid69632 00:27:33.827 Removing: 
/var/run/dpdk/spdk_pid69660 00:27:34.085 Removing: /var/run/dpdk/spdk_pid69734 00:27:34.085 Removing: /var/run/dpdk/spdk_pid69762 00:27:34.085 Removing: /var/run/dpdk/spdk_pid69841 00:27:34.085 Removing: /var/run/dpdk/spdk_pid69869 00:27:34.085 Removing: /var/run/dpdk/spdk_pid69920 00:27:34.085 Removing: /var/run/dpdk/spdk_pid69950 00:27:34.085 Removing: /var/run/dpdk/spdk_pid69998 00:27:34.085 Removing: /var/run/dpdk/spdk_pid70026 00:27:34.085 Removing: /var/run/dpdk/spdk_pid70185 00:27:34.085 Removing: /var/run/dpdk/spdk_pid70221 00:27:34.085 Removing: /var/run/dpdk/spdk_pid70297 00:27:34.085 Removing: /var/run/dpdk/spdk_pid70374 00:27:34.085 Removing: /var/run/dpdk/spdk_pid70404 00:27:34.085 Removing: /var/run/dpdk/spdk_pid70457 00:27:34.085 Removing: /var/run/dpdk/spdk_pid70482 00:27:34.085 Removing: /var/run/dpdk/spdk_pid70511 00:27:34.085 Removing: /var/run/dpdk/spdk_pid70531 00:27:34.085 Removing: /var/run/dpdk/spdk_pid70565 00:27:34.085 Removing: /var/run/dpdk/spdk_pid70579 00:27:34.085 Removing: /var/run/dpdk/spdk_pid70619 00:27:34.085 Removing: /var/run/dpdk/spdk_pid70635 00:27:34.085 Removing: /var/run/dpdk/spdk_pid70670 00:27:34.085 Removing: /var/run/dpdk/spdk_pid70691 00:27:34.085 Removing: /var/run/dpdk/spdk_pid70720 00:27:34.085 Removing: /var/run/dpdk/spdk_pid70740 00:27:34.085 Removing: /var/run/dpdk/spdk_pid70774 00:27:34.085 Removing: /var/run/dpdk/spdk_pid70794 00:27:34.085 Removing: /var/run/dpdk/spdk_pid70828 00:27:34.085 Removing: /var/run/dpdk/spdk_pid70843 00:27:34.086 Removing: /var/run/dpdk/spdk_pid70878 00:27:34.086 Removing: /var/run/dpdk/spdk_pid70897 00:27:34.086 Removing: /var/run/dpdk/spdk_pid70932 00:27:34.086 Removing: /var/run/dpdk/spdk_pid70946 00:27:34.086 Removing: /var/run/dpdk/spdk_pid70986 00:27:34.086 Removing: /var/run/dpdk/spdk_pid71000 00:27:34.086 Removing: /var/run/dpdk/spdk_pid71034 00:27:34.086 Removing: /var/run/dpdk/spdk_pid71054 00:27:34.086 Removing: /var/run/dpdk/spdk_pid71083 00:27:34.086 Removing: /var/run/dpdk/spdk_pid71108 00:27:34.086 Removing: /var/run/dpdk/spdk_pid71137 00:27:34.086 Removing: /var/run/dpdk/spdk_pid71151 00:27:34.086 Removing: /var/run/dpdk/spdk_pid71191 00:27:34.086 Removing: /var/run/dpdk/spdk_pid71205 00:27:34.086 Removing: /var/run/dpdk/spdk_pid71240 00:27:34.086 Removing: /var/run/dpdk/spdk_pid71259 00:27:34.086 Removing: /var/run/dpdk/spdk_pid71288 00:27:34.086 Removing: /var/run/dpdk/spdk_pid71316 00:27:34.086 Removing: /var/run/dpdk/spdk_pid71348 00:27:34.086 Removing: /var/run/dpdk/spdk_pid71376 00:27:34.086 Removing: /var/run/dpdk/spdk_pid71414 00:27:34.086 Removing: /var/run/dpdk/spdk_pid71433 00:27:34.086 Removing: /var/run/dpdk/spdk_pid71462 00:27:34.086 Removing: /var/run/dpdk/spdk_pid71482 00:27:34.086 Removing: /var/run/dpdk/spdk_pid71517 00:27:34.086 Removing: /var/run/dpdk/spdk_pid71594 00:27:34.086 Removing: /var/run/dpdk/spdk_pid71693 00:27:34.086 Removing: /var/run/dpdk/spdk_pid72130 00:27:34.086 Removing: /var/run/dpdk/spdk_pid79097 00:27:34.086 Removing: /var/run/dpdk/spdk_pid79453 00:27:34.086 Removing: /var/run/dpdk/spdk_pid81905 00:27:34.086 Removing: /var/run/dpdk/spdk_pid82287 00:27:34.086 Removing: /var/run/dpdk/spdk_pid82558 00:27:34.086 Removing: /var/run/dpdk/spdk_pid82598 00:27:34.086 Removing: /var/run/dpdk/spdk_pid82915 00:27:34.086 Removing: /var/run/dpdk/spdk_pid82965 00:27:34.086 Removing: /var/run/dpdk/spdk_pid83354 00:27:34.086 Removing: /var/run/dpdk/spdk_pid83885 00:27:34.086 Removing: /var/run/dpdk/spdk_pid84313 00:27:34.086 Removing: /var/run/dpdk/spdk_pid85286 
00:27:34.086 Removing: /var/run/dpdk/spdk_pid86286 00:27:34.086 Removing: /var/run/dpdk/spdk_pid86399 00:27:34.086 Removing: /var/run/dpdk/spdk_pid86467 00:27:34.086 Removing: /var/run/dpdk/spdk_pid87948 00:27:34.086 Removing: /var/run/dpdk/spdk_pid88203 00:27:34.086 Removing: /var/run/dpdk/spdk_pid88650 00:27:34.086 Removing: /var/run/dpdk/spdk_pid88762 00:27:34.086 Removing: /var/run/dpdk/spdk_pid88909 00:27:34.086 Removing: /var/run/dpdk/spdk_pid88959 00:27:34.345 Removing: /var/run/dpdk/spdk_pid89000 00:27:34.345 Removing: /var/run/dpdk/spdk_pid89046 00:27:34.345 Removing: /var/run/dpdk/spdk_pid89209 00:27:34.345 Removing: /var/run/dpdk/spdk_pid89356 00:27:34.345 Removing: /var/run/dpdk/spdk_pid89620 00:27:34.345 Removing: /var/run/dpdk/spdk_pid89743 00:27:34.345 Removing: /var/run/dpdk/spdk_pid90167 00:27:34.345 Removing: /var/run/dpdk/spdk_pid90555 00:27:34.345 Removing: /var/run/dpdk/spdk_pid90564 00:27:34.345 Removing: /var/run/dpdk/spdk_pid92822 00:27:34.345 Removing: /var/run/dpdk/spdk_pid93133 00:27:34.345 Removing: /var/run/dpdk/spdk_pid93644 00:27:34.345 Removing: /var/run/dpdk/spdk_pid93652 00:27:34.345 Removing: /var/run/dpdk/spdk_pid93996 00:27:34.345 Removing: /var/run/dpdk/spdk_pid94016 00:27:34.345 Removing: /var/run/dpdk/spdk_pid94034 00:27:34.345 Removing: /var/run/dpdk/spdk_pid94066 00:27:34.345 Removing: /var/run/dpdk/spdk_pid94071 00:27:34.345 Removing: /var/run/dpdk/spdk_pid94212 00:27:34.345 Removing: /var/run/dpdk/spdk_pid94219 00:27:34.345 Removing: /var/run/dpdk/spdk_pid94322 00:27:34.345 Removing: /var/run/dpdk/spdk_pid94334 00:27:34.345 Removing: /var/run/dpdk/spdk_pid94438 00:27:34.345 Removing: /var/run/dpdk/spdk_pid94440 00:27:34.345 Removing: /var/run/dpdk/spdk_pid94923 00:27:34.345 Removing: /var/run/dpdk/spdk_pid94972 00:27:34.345 Removing: /var/run/dpdk/spdk_pid95123 00:27:34.345 Removing: /var/run/dpdk/spdk_pid95245 00:27:34.345 Removing: /var/run/dpdk/spdk_pid95651 00:27:34.345 Removing: /var/run/dpdk/spdk_pid95897 00:27:34.345 Removing: /var/run/dpdk/spdk_pid96400 00:27:34.345 Removing: /var/run/dpdk/spdk_pid96964 00:27:34.345 Removing: /var/run/dpdk/spdk_pid97410 00:27:34.345 Removing: /var/run/dpdk/spdk_pid97505 00:27:34.345 Removing: /var/run/dpdk/spdk_pid97576 00:27:34.345 Removing: /var/run/dpdk/spdk_pid97663 00:27:34.345 Removing: /var/run/dpdk/spdk_pid97826 00:27:34.345 Removing: /var/run/dpdk/spdk_pid97918 00:27:34.345 Removing: /var/run/dpdk/spdk_pid98003 00:27:34.345 Removing: /var/run/dpdk/spdk_pid98093 00:27:34.345 Removing: /var/run/dpdk/spdk_pid98433 00:27:34.345 Removing: /var/run/dpdk/spdk_pid99140 00:27:34.345 Clean 00:27:34.345 killing process with pid 61819 00:27:34.603 killing process with pid 61823 00:27:34.603 14:33:40 -- common/autotest_common.sh@1446 -- # return 0 00:27:34.603 14:33:40 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:27:34.603 14:33:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:34.603 14:33:40 -- common/autotest_common.sh@10 -- # set +x 00:27:34.603 14:33:40 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:27:34.603 14:33:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:34.603 14:33:40 -- common/autotest_common.sh@10 -- # set +x 00:27:34.603 14:33:40 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:34.603 14:33:40 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:27:34.603 14:33:40 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:27:34.603 14:33:40 
-- spdk/autotest.sh@381 -- # [[ y == y ]] 00:27:34.603 14:33:40 -- spdk/autotest.sh@383 -- # hostname 00:27:34.603 14:33:40 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:27:34.862 geninfo: WARNING: invalid characters removed from testname! 00:27:56.792 14:33:59 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:57.050 14:34:02 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:58.980 14:34:04 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:01.508 14:34:06 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:03.411 14:34:08 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:05.316 14:34:10 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:07.881 14:34:12 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:07.881 14:34:13 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:28:07.881 14:34:13 -- common/autotest_common.sh@1690 -- $ lcov --version 00:28:07.881 14:34:13 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:28:07.881 14:34:13 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:28:07.881 14:34:13 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:28:07.881 14:34:13 -- scripts/common.sh@332 -- $ local ver1 ver1_l 
00:28:07.881 14:34:13 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:28:07.881 14:34:13 -- scripts/common.sh@335 -- $ IFS=.-: 00:28:07.881 14:34:13 -- scripts/common.sh@335 -- $ read -ra ver1 00:28:07.881 14:34:13 -- scripts/common.sh@336 -- $ IFS=.-: 00:28:07.881 14:34:13 -- scripts/common.sh@336 -- $ read -ra ver2 00:28:07.881 14:34:13 -- scripts/common.sh@337 -- $ local 'op=<' 00:28:07.881 14:34:13 -- scripts/common.sh@339 -- $ ver1_l=2 00:28:07.881 14:34:13 -- scripts/common.sh@340 -- $ ver2_l=1 00:28:07.881 14:34:13 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:28:07.881 14:34:13 -- scripts/common.sh@343 -- $ case "$op" in 00:28:07.881 14:34:13 -- scripts/common.sh@344 -- $ : 1 00:28:07.881 14:34:13 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:28:07.881 14:34:13 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:07.881 14:34:13 -- scripts/common.sh@364 -- $ decimal 1 00:28:07.881 14:34:13 -- scripts/common.sh@352 -- $ local d=1 00:28:07.881 14:34:13 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:28:07.881 14:34:13 -- scripts/common.sh@354 -- $ echo 1 00:28:07.881 14:34:13 -- scripts/common.sh@364 -- $ ver1[v]=1 00:28:07.881 14:34:13 -- scripts/common.sh@365 -- $ decimal 2 00:28:07.881 14:34:13 -- scripts/common.sh@352 -- $ local d=2 00:28:07.881 14:34:13 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:28:07.881 14:34:13 -- scripts/common.sh@354 -- $ echo 2 00:28:07.881 14:34:13 -- scripts/common.sh@365 -- $ ver2[v]=2 00:28:07.881 14:34:13 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:28:07.881 14:34:13 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:28:07.881 14:34:13 -- scripts/common.sh@367 -- $ return 0 00:28:07.881 14:34:13 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:07.881 14:34:13 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:28:07.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.881 --rc genhtml_branch_coverage=1 00:28:07.881 --rc genhtml_function_coverage=1 00:28:07.881 --rc genhtml_legend=1 00:28:07.881 --rc geninfo_all_blocks=1 00:28:07.881 --rc geninfo_unexecuted_blocks=1 00:28:07.881 00:28:07.881 ' 00:28:07.881 14:34:13 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:28:07.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.881 --rc genhtml_branch_coverage=1 00:28:07.881 --rc genhtml_function_coverage=1 00:28:07.881 --rc genhtml_legend=1 00:28:07.881 --rc geninfo_all_blocks=1 00:28:07.881 --rc geninfo_unexecuted_blocks=1 00:28:07.881 00:28:07.881 ' 00:28:07.881 14:34:13 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:28:07.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.881 --rc genhtml_branch_coverage=1 00:28:07.881 --rc genhtml_function_coverage=1 00:28:07.881 --rc genhtml_legend=1 00:28:07.881 --rc geninfo_all_blocks=1 00:28:07.881 --rc geninfo_unexecuted_blocks=1 00:28:07.881 00:28:07.881 ' 00:28:07.881 14:34:13 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:28:07.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.881 --rc genhtml_branch_coverage=1 00:28:07.881 --rc genhtml_function_coverage=1 00:28:07.881 --rc genhtml_legend=1 00:28:07.881 --rc geninfo_all_blocks=1 00:28:07.881 --rc geninfo_unexecuted_blocks=1 00:28:07.881 00:28:07.881 ' 00:28:07.881 14:34:13 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:07.881 14:34:13 -- scripts/common.sh@433 -- $ 
[[ -e /bin/wpdk_common.sh ]] 00:28:07.881 14:34:13 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:07.881 14:34:13 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:07.881 14:34:13 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.881 14:34:13 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.881 14:34:13 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.881 14:34:13 -- paths/export.sh@5 -- $ export PATH 00:28:07.881 14:34:13 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.881 14:34:13 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:28:07.881 14:34:13 -- common/autobuild_common.sh@440 -- $ date +%s 00:28:07.881 14:34:13 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1733409253.XXXXXX 00:28:07.881 14:34:13 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1733409253.zI9qeQ 00:28:07.881 14:34:13 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:28:07.881 14:34:13 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']' 00:28:07.881 14:34:13 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:28:07.881 14:34:13 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:28:07.881 14:34:13 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:28:07.881 14:34:13 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:28:07.881 14:34:13 -- common/autobuild_common.sh@456 -- $ get_config_params 00:28:07.881 14:34:13 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:28:07.881 14:34:13 -- common/autotest_common.sh@10 -- $ set +x 00:28:07.881 14:34:13 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan 
--enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:28:07.881 14:34:13 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:28:07.881 14:34:13 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:28:07.881 14:34:13 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:28:07.881 14:34:13 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:28:07.881 14:34:13 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:28:07.881 14:34:13 -- spdk/autopackage.sh@19 -- $ timing_finish 00:28:07.881 14:34:13 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:28:07.881 14:34:13 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:28:07.881 14:34:13 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:07.881 14:34:13 -- spdk/autopackage.sh@20 -- $ exit 0 00:28:07.881 + [[ -n 5963 ]] 00:28:07.881 + sudo kill 5963 00:28:07.891 [Pipeline] } 00:28:07.907 [Pipeline] // timeout 00:28:07.913 [Pipeline] } 00:28:07.928 [Pipeline] // stage 00:28:07.933 [Pipeline] } 00:28:07.948 [Pipeline] // catchError 00:28:07.957 [Pipeline] stage 00:28:07.960 [Pipeline] { (Stop VM) 00:28:07.972 [Pipeline] sh 00:28:08.254 + vagrant halt 00:28:11.543 ==> default: Halting domain... 00:28:18.173 [Pipeline] sh 00:28:18.454 + vagrant destroy -f 00:28:20.987 ==> default: Removing domain... 00:28:21.000 [Pipeline] sh 00:28:21.284 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:28:21.362 [Pipeline] } 00:28:21.379 [Pipeline] // stage 00:28:21.386 [Pipeline] } 00:28:21.402 [Pipeline] // dir 00:28:21.408 [Pipeline] } 00:28:21.425 [Pipeline] // wrap 00:28:21.433 [Pipeline] } 00:28:21.447 [Pipeline] // catchError 00:28:21.457 [Pipeline] stage 00:28:21.459 [Pipeline] { (Epilogue) 00:28:21.474 [Pipeline] sh 00:28:21.782 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:25.979 [Pipeline] catchError 00:28:25.982 [Pipeline] { 00:28:25.995 [Pipeline] sh 00:28:26.276 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:26.535 Artifacts sizes are good 00:28:26.557 [Pipeline] } 00:28:26.573 [Pipeline] // catchError 00:28:26.587 [Pipeline] archiveArtifacts 00:28:26.595 Archiving artifacts 00:28:26.714 [Pipeline] cleanWs 00:28:26.726 [WS-CLEANUP] Deleting project workspace... 00:28:26.726 [WS-CLEANUP] Deferred wipeout is used... 00:28:26.733 [WS-CLEANUP] done 00:28:26.735 [Pipeline] } 00:28:26.752 [Pipeline] // stage 00:28:26.757 [Pipeline] } 00:28:26.772 [Pipeline] // node 00:28:26.778 [Pipeline] End of Pipeline 00:28:26.820 Finished: SUCCESS
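
For reference, the kernel_target_abort test traced above drives the in-kernel nvmet target purely through configfs. The xtrace shows the mkdir/echo/ln commands but not their redirection targets, so the attribute file names below are the standard nvmet configfs ones and should be read as an assumption, not as a copy of the script. A minimal standalone sketch of the same bring-up and teardown, assuming a spare namespace block device at /dev/nvme1n3 and the 10.0.0.1:4420 TCP listener used in the run:

#!/usr/bin/env bash
# Sketch of the configfs-based nvmet bring-up exercised by kernel_target_abort.
# Assumption: attribute names follow the standard nvmet configfs layout; the log
# only shows the echo/mkdir/ln commands, not the files they write to.
set -euo pipefail

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/kernel_target
ns=$subsys/namespaces/1
port=$nvmet/ports/1
blockdev=/dev/nvme1n3    # spare NVMe namespace picked by the test

# The log loads only nvmet; loading nvmet_tcp explicitly avoids relying on
# automatic transport module loading when the port is linked.
modprobe nvmet nvmet_tcp

# Subsystem, namespace, and port directories
mkdir -p "$subsys" "$ns" "$port"
echo SPDK-kernel_target > "$subsys/attr_serial"      # identity string; exact attribute not shown in the xtrace
echo 1 > "$subsys/attr_allow_any_host"
echo "$blockdev" > "$ns/device_path"
echo 1 > "$ns/enable"

# TCP listener on 10.0.0.1:4420
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/kernel_target"

# ... run the abort workload against traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target ...

# Teardown, mirroring clean_kernel_target in the log
echo 0 > "$ns/enable"
rm -f "$port/subsystems/kernel_target"
rmdir "$ns" "$port" "$subsys"
modprobe -r nvmet_tcp nvmet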